API Evaluation

API stands for “Application Programming Interface”. APIs are sets of interfaces that developers use to build their applications. Common examples are the Java, Python, and Ruby standard libraries, the Android and iOS platform APIs, and many third-party libraries (e.g. jQuery and the Google Maps API). In addition to their source code, APIs ship with documentation. Client developers of varying skill levels then read this documentation to build distinct applications on top of the same API. This means that an API should be unambiguous and useful, so that developers do not end up writing applications that are prone to crashes.

Although there are metrics for evaluating the quality of source code (see [3]), there are no clearly defined metrics for evaluating APIs—and specifically their documentation. Common methods to evaluate an API include: 1) studying the pre-conditions and post-conditions of its methods, 2) fuzz testing (see [2]), which passes random or invalid values as method arguments in order to find bugs and design defects, and 3) analyzing crash and bug reports from client applications that use the API. In addition, the API reference documentation itself can be evaluated through qualitative studies (i.e. questionnaires and interviews with client-application developers) addressing usability and learnability issues (see [4]).
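To make the second method concrete, here is a minimal fuzz-testing sketch in Java. The method under test, `parsePercent`, is a hypothetical API method invented for this example; the idea is simply to feed it invalid inputs and count the unchecked exceptions that escape, each of which points at a bug or a documentation gap.

```java
public class FuzzSketch {

    // Hypothetical API method under test: parses strings like "42%".
    // Its implicit precondition is a non-null numeric string ending in '%'.
    static int parsePercent(String s) {
        return Integer.parseInt(s.substring(0, s.length() - 1));
    }

    // Feed each input to the method and count the unchecked exceptions
    // that escape: every one is a candidate defect or undocumented
    // precondition that the API reference should mention.
    static int countFailures(String[] inputs) {
        int failures = 0;
        for (String input : inputs) {
            try {
                parsePercent(input);
            } catch (RuntimeException e) {
                failures++;
            }
        }
        return failures;
    }

    public static void main(String[] args) {
        String[] invalid = { null, "", "%", "abc", "12" };
        // null -> NullPointerException, "" -> StringIndexOutOfBoundsException,
        // "%" and "abc" -> NumberFormatException, "12" parses without error.
        System.out.println("unexpected failures: " + countFailures(invalid));
    }
}
```

A real fuzzer would generate the inputs randomly rather than from a fixed list, but even this tiny harness shows how quickly undocumented failure modes surface.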

Since API designers know their own source code and are proficient programmers, they may assume that many elements of an API are self-evident. An average developer, though, could find the same documentation incomplete. To help with this, there are tools, such as Javadoc, that automatically generate documentation from the source code and its comments. However, API designers can still omit important details from the source code, resulting in insufficient documentation. A characteristic example is the handling of unchecked exceptions in an API (see [1]).
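The unchecked-exception example can be illustrated with a short Javadoc sketch. The `Inventory` class and its `itemAt` method below are hypothetical, invented for this example; the point is that `IndexOutOfBoundsException` is unchecked, so the `@throws` tag is the only place callers are warned about it, and a designer who omits it leaves a silent gap in the generated documentation.

```java
public class Inventory {
    private final String[] items = { "bolt", "nut" };

    /**
     * Returns the item stored at the given position.
     *
     * @param index position of the item, starting at 0
     * @return the item at {@code index}
     * @throws IndexOutOfBoundsException if {@code index} is negative or
     *         not less than {@link #size()}; because this exception is
     *         unchecked, only this tag (not the compiler) warns callers
     */
    public String itemAt(int index) {
        return items[index];
    }

    /** Returns the number of items held. */
    public int size() {
        return items.length;
    }
}
```

Javadoc copies the `@throws` text into the generated API reference, which is why Bloch [1] recommends documenting every unchecked exception a method can throw.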

Finally, one could say that there is a need for more automation in the design and documentation of an API. This can be achieved through rigorous testing, as well as through frequent design and implementation iterations, in accordance with developers’ needs.

[1] J. Bloch. 2006. How to design a good API and why it matters. In Companion to the 21st ACM SIGPLAN symposium on Object-oriented programming systems, languages, and applications (OOPSLA ’06). ACM, New York, NY, USA, 506-507.

[2] J. Forrester and B. Miller. 2000. An empirical study of the robustness of Windows NT applications using random testing. In Proceedings of the 4th conference on USENIX Windows Systems Symposium – Volume 4 (WSS’00), Vol. 4. USENIX Association, Berkeley, CA, USA, 6-6.

[3] S. Kan. 2003. Metrics and Models in Software Quality Engineering. Addison-Wesley. ISBN 0-201-72915-6.

[4] M. Robillard and R. DeLine. 2011. A field study of API learning obstacles. Empirical Software Engineering 16(6), 703–732.
