

Google I/O Session Summaries:

Each heading is a link to the session's homepage with a video of the talk. All of the 80+ talk videos are online, along with a PDF of the presenter's slides. The session pages can be found at the session index.


How Do I Code Thee? Let Me Count the Ways:

In his talk, Dan Morrill outlined the three modalities available on Android, and the strengths and weaknesses of each. The three modalities are native code (C), Ajax (HTML/JavaScript), and managed code (Java). He took a generic problem, in this case the K-means algorithm, and implemented it in all three models. Briefly, the K-means algorithm groups a random assortment of objects (points) into clusters based on proximity. In other words, the idea is to divide a set based on the relative location of each object to a common point or "centroid." It is a greedy, iterative algorithm: it runs until a pass makes no new changes, or until the user terminates it.
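Dan's actual code was not published with these notes; as a rough illustration of the algorithm itself, here is a minimal one-dimensional K-means sketch in plain Java (the class and method names are mine, not Dan's):

```java
// Minimal 1-D K-means: assign each point to its nearest centroid,
// then recompute each centroid as the mean of its points, repeating
// until the assignments stop changing.
public class KMeans {
    public static int[] cluster(double[] points, double[] centroids) {
        int[] labels = new int[points.length];
        boolean changed = true;
        while (changed) {
            changed = false;
            // Assignment step: label each point with its nearest centroid.
            for (int i = 0; i < points.length; i++) {
                int best = 0;
                for (int k = 1; k < centroids.length; k++) {
                    if (Math.abs(points[i] - centroids[k])
                            < Math.abs(points[i] - centroids[best])) {
                        best = k;
                    }
                }
                if (labels[i] != best) { labels[i] = best; changed = true; }
            }
            // Update step: move each centroid to the mean of its points.
            for (int k = 0; k < centroids.length; k++) {
                double sum = 0; int count = 0;
                for (int i = 0; i < points.length; i++) {
                    if (labels[i] == k) { sum += points[i]; count++; }
                }
                if (count > 0) centroids[k] = sum / count;
            }
        }
        return labels;
    }
}
```

The same assignment/update loop, ported unchanged across C, Java, and JavaScript, is what produced Dan's timing comparison.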

Dan explained his rationale by saying that he wanted to illustrate how to go about coding an app which requires heavy computation on-device. In a way, "how to code" such applications is a problem in distributed computing, since 3G mobile devices are intended to be fully integrated with the web, and therefore with a network of computing resources. K-means is a good choice because, while it is an efficient solution to its particular problem, it is not an "easy" computation, and, especially on a low-power platform, it demonstrates a significant difference in computation time across implementations even though the algorithm itself does not change.

Dan first demonstrated the implementation of the algorithm in Java, running in the Dalvik virtual machine. It is worth pointing out here that the Android VM is not the standard JRE found on desktop computers. It is highly optimized for mobile devices, and chief among these optimizations is how garbage collection (GC) is handled. The primary reason for this is that each app (or process... this detail was unclear) is allowed only 16 MB of memory. The developers at the conference stated that they do not plan to change this system-wide policy any time soon, so most applications must be written to be as efficient as possible. Dalvik helps in this regard by invoking garbage collection often, which will be discussed later.

Java is the primary development environment for the Android platform, and for the most part it should be the only environment needed. All phone hardware and software libraries are available to Dalvik and the Android SDK (free as in beer, and as in speech). However, because of the various optimizations and tweaks, some tasks, such as the clustering algorithm that Dan chose to demonstrate, benefit from more creative coding. At this point it is helpful to see the performance breakdown before discussing the differences between coding methods (the following is straight from Dan's presentation):

Modality | Total | Rendering Time | Percent Rendering

As the data illustrates, Dalvik is a clear middle ground between the two other modalities. Rendering to the screen has a bottom-line computation time, because even if you wanted to implement it, a native draw method would not have direct access to the phone hardware, as will be discussed later. Though the Dalvik clustering algorithm performed significantly slower than the native algorithm, this is by no means a "bad" result. The algorithm was specifically chosen as a computation-heavy task, which is exactly what is necessary to demonstrate the performance difference. Most apps will be bottlenecked by data transfers, waiting on GPS coordinates, or waiting on user interaction long before developers need to worry about a ~500 ms speed increase. To put it concisely: in the unlikely circumstance that Java code performs too slowly, implement the power-hungry function in C.

Native integration on Android works very similarly to native integration in desktop Java. In fact, native Android code currently uses JNI (the Java Native Interface) to hook native code, and the only real difference is that the native library must be compiled for the ARM architecture rather than the architecture of the development platform. When Dan presented the K-means project, he ran all three implementations from the same Eclipse project, making very few changes to the main Activity classes to switch implementations. The only function in the C file was the "clusterer" function, which takes in the array of points to be labeled and labels them. The draw functionality, the thread handlers, and everything else was still implemented in Java.

Dan mentioned a brief aside at this point. He first wrote the clusterer function in Java using a HashMap (or some other dynamically allocated data structure) to hold all of the points, so as to be flexible about the number of points considered, but he had to use a static array in C. Out of curiosity, he went back to Java, used arrays there too, and found that there was a real performance difference. Such a difference would be present on any platform (since the algorithm does lots of random access) but would be negligible on a desktop because of its raw speed. The Dalvik framework doesn't handle these data structures any differently, but because of the difference in processor speed (the phone's ARM processor runs at around 500 MHz), efficiency is paramount. Choices such as which data structure to use may need to be rethought when porting seemingly trivial algorithms to a mobile platform, simply because of the huge performance disparity.
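To make the two storage choices concrete (this is my own sketch, not Dan's code), compare holding per-point labels in a boxed HashMap against a primitive array; the Map version boxes every key and value and hashes on every access, which a tight clustering loop repeats thousands of times:

```java
import java.util.Map;

// Two ways to store per-point cluster labels. On a desktop JVM the
// difference is negligible; on a ~500 MHz ARM device the boxing and
// hashing overhead of the Map version adds up inside a tight loop.
public class LabelStore {
    // Flexible but costly: every get() boxes the int key and unboxes the value.
    public static int sumLabelsMap(Map<Integer, Integer> labels, int n) {
        int sum = 0;
        for (int i = 0; i < n; i++) sum += labels.get(i);
        return sum;
    }

    // Fixed-size but cheap: plain random access into a primitive array.
    public static int sumLabelsArray(int[] labels) {
        int sum = 0;
        for (int label : labels) sum += label;
        return sum;
    }
}
```

Both return the same answer; only the constant factors differ, which is exactly the disparity Dan observed.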

Finally, there is Ajax code. The benefit of using Ajax may not be immediately apparent. First of all, Ajax will run in any browser. Advanced applications like Facebook, Google Maps, etc. (basically all Google products) integrate JavaScript functionality that is not yet supported by all browsers. The <canvas> tag, an integral part of the future HTML 5 standard and widely used today, is not explicitly supported by the Android browser, but standard JavaScript functions (namely those dealing directly with HTML 4 elements and strings) will work just fine as long as JavaScript is enabled in the browser. The point here is that if you want your application to be as widely used as possible, a JavaScript hook will be the easiest way to do it. JavaScript also has the added advantage of having a (presumably) powerful server (or cluster of servers) to communicate with and give jobs to. Dan wanted to demonstrate essentially "what not to do," so he also implemented the clustering function in JavaScript; a smarter approach would be to have JavaScript render the page but tell the server to run the algorithm, possibly render the image, and send it to the phone, where JavaScript and a WebView would display it.

Like native coding, Ajax is very easy to integrate. The framework includes a WebView class, which can be placed anywhere in the GUI layout and acts as essentially a "mini-browser" capable of rendering any HTML page that the phone browser can. The K-means JavaScript app linked the main Activity's WebView (which took up the whole screen) to a page (bundled in the app's .apk file) which ran the entire process in Ajax. The page also works in any browser on a PC, and presumably on the iPhone (though that may require a few hoops to jump through). As is clear from the table above, this method is far inferior to Java and C, but it is a useful benchmark to consider when deciding how best to deploy your application.

Dan concluded his talk by reiterating the point that there are three development channels for a reason. The best apps will use a mash-up of these coding practices in order to provide their functionality as efficiently and seamlessly as possible. Each modality has its benefits and drawbacks, and each is suited to different needs.

Pixel Perfect Code: How to Marry Interaction and Visual Design the Android Way:

Chris Nesladek's talk focused on the increased importance of visual design on a mobile platform. While desktop UIs are fairly mature and most people can appreciate the "right way" to lay out an application, the mobile platform has many pitfalls. Chief among his advice was to handle as much decision-making as possible for the user, while still granting the user control. For example, he reminded everyone of the existence of "toast," a view created for "quick little messages" to the user. Upon completion of a task, rather than a dialog with an "OK" button, a toast message can be used to notify the user; better still, if the notification isn't essential, leave it out entirely.

More generally, his talk focused on a flow for UI design: structure should support behavior, which should aid expression. By expression he is referring to the pure "design" of the UI, which, especially in a competitive marketplace, can make or break an app. Even if an app is the first, or the only, app to provide a certain functionality, an ugly or frustrating UI can spell its doom. Users have come to associate elegance with performance, and unless the core functionality is extremely compelling, an app will be judged (and rated) primarily by its "user experience."

Back to the specifics, here are some suggestions:

  • Take advantage of Android's multitasking: heavy computation can be spun off to a background thread, leaving the UI quick and responsive. TraceView will help with this (more below).
  • Order menus left to right by importance. This helps users develop muscle memory and contributes to a better overall experience.
  • Any item can also have a long-press action, the equivalent of a right-click, which traditionally brings up a menu.
  • Minimize on-screen actions and keep the view hierarchy (views within views) as flat as possible, so layouts fail gracefully and are less likely to break when changes are made.
  • Use "lazy load" at startup, etc., so the UI can appear before its underlying framework has been fully established.
  • "Spinners" are the Android equivalent of drop-down menus.

Finally, a nice example he showed was the login screen of a particular app. When the incorrect username and password were entered, the fields would shake back and forth and clear the data. This is much more helpful and fun than an error message and is delightful to see as a user. Chris strongly endorsed such creativity as a good example of a dynamic and unobtrusive solution to a problem.

Coding for Life - Battery Life, That Is:

In his summary, Jeffrey Sharkey says "the three most important considerations for mobile applications are, in order: battery life, battery life, and battery life." This is, of course, redundant and an oversimplification, but he has a very compelling point. As with all resources on a mobile device, power is limited. The batteries in the HTC Dream (G1) and HTC Magic (G2) are 1150 mAh and 1350 mAh respectively. A new Samsung model will have a 1500 mAh battery, but generally these are the numbers we have to work with. Briefly, the LCD screen at optimal brightness (about 50%) uses 90 mA (in response to a later question Jeff pointed out that at 0% brightness the screen took about 70 mA, so the majority of the power is used to drive the LCD). The CPU at 100% usage draws about 110 mA, and the accelerometer, in its "game" mode, draws 80 mA. He also gave some numbers for the maximum times for certain activities:

  • Watching YouTube: 340 mA = 3.4 hours
  • Browsing the 3G web: 225 mA = 4 hours
  • Typical usage: 42 mA = 32 hours
  • EDGE, completely idle: 5 mA = 9.5 days
  • Airplane mode, idle: 2 mA = 24 days

For clarification, EDGE is the sub-3G wireless internet standard. To consider the problem at a level above the hard numbers, Jeff pointed out the major practical problems of power conservation. First on the list was waking up while idle for background tasks. Waking up the phone uses a lot of CPU power and usually also turns on the data radio. Furthermore, wake-up scheduling can be a big problem. If, hypothetically, three different applications want to wake up the phone to retrieve e-mail or SMS messages, check stock quotes, etc., there can be a timing issue. If all three work on a 30-minute interval but are queued 10 minutes apart, what was first an efficient 30 minutes of sleep plus a minute or so of work turns into only 10 minutes of sleep and a lot of wasted wake-up overhead. The solution is to use "bundled alarms" so that activities synchronize their wake-up routines and do all important work together at the desired 30-minute intervals.
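The bundling arithmetic can be sketched in a few lines of plain Java (the class and method names here are my own; on Android the real mechanism is AlarmManager's inexact repeating alarms, which let the system merge wake-ups for you):

```java
// Sketch of the "bundled alarm" idea: each app's requested wake-up time
// is rounded up to a shared interval boundary, so three 30-minute tasks
// queued 10 minutes apart all fire together instead of waking the phone
// three times as often.
public class AlarmBundler {
    // Rounds a requested trigger time (in minutes) up to the next
    // multiple of the bundling interval.
    public static long bundle(long requestedMinute, long intervalMinutes) {
        return ((requestedMinute + intervalMinutes - 1) / intervalMinutes)
                * intervalMinutes;
    }
}
```

With a 30-minute interval, requests at minutes 5, 15, and 25 all land on the same minute-30 wake-up, restoring the full sleep window between bursts of work.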

Another power hog is data transfer. For performance's sake, most mobile applications will try to keep their data transfers to quick, short bursts, but sometimes it is necessary to transmit a lot of data. Counter-intuitively, the faster, more power-hungry protocols are ideal. Take the following power data for transferring about 50 MB:

  • EDGE (90 kbps): 300 mA * 9.1 min = 45 mAh
  • 3G (300 kbps): 210 mA * 2.7 min = 9.5 mAh
  • WiFi (1 Mbps): 330 mA * 48 sec = 4.4 mAh

EDGE is clearly the lesser standard, drawing more power than 3G for a lower data rate, but one might not expect that WiFi, which draws 120 mA more than 3G, would be the most efficient means of bulk data transfer. Not only is the experience immediately more responsive for the user, but the transfer also ends up using less than half the power. Other network-related power losses come from the overhead involved in network requests. If immediate feedback is not necessary, it is more efficient to bundle data transfers into a single request rather than sending lots of small pieces more frequently. While there is not much control over request frequency when receiving data, output can be managed so that it uses radio time efficiently. A third communication power concern is location detection. The GPS draws less power than the wireless radios, but it can take a very long time to get a location fix. Often (without a view of clear sky) the GPS fails entirely after wasting quite a bit of battery life. The data below illustrates this point:

  • GPS: 25 seconds * 140 mA = 1 mAh
  • Network: 2 seconds * 180 mA = 0.1 mAh

The "network" alternative detection involves triangulating position based on the signal strength of various service towers. The resulting data is much less precise than GPS location, but, as you can see, uses as little as 10% of the power. If we are only interested in a person's zip code, network location is faster and less battery-intensive.
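The mAh figures above are simply current multiplied by time; a one-line helper (my own, for illustration) reproduces them:

```java
// Energy cost of an activity in mAh: average current draw (mA)
// multiplied by the duration converted to hours.
public class EnergyCost {
    public static double mAh(double currentMa, double seconds) {
        return currentMa * seconds / 3600.0;
    }
}
```

Plugging in the WiFi transfer (330 mA for 48 s) gives 4.4 mAh, and the GPS fix (140 mA for 25 s) gives roughly 1 mAh, matching the talk's numbers.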

As methods for improvement, Jeff suggested the following: use Gzip to compress incoming and outgoing data, especially text, to reduce transfer time. It is also important to choose the most efficient parser for your data; though there may be extra processing involved, it ends up being "cheaper" overall for transmitting information. Gzip is also used for standard desktop data transmission: savvy web developers can enable an Apache flag to compress all outgoing data, which is automatically decoded by all modern browsers, and such compression can cut load time in half (excluding images and video, of course). Most services on the Android platform can be enabled and disabled by applications, depending on permissions, and much of the hardware is designed with battery life in mind. The CPU auto-scales as aggressively as possible while maintaining performance, and the accelerometer has four different activity modes, each providing a higher sample rate at a higher power cost. Finally, by closing services when you are done with them and, most importantly, allowing the phone to sleep as soon as a task is completed, small savings add up, allowing the user to continue to use the phone.
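As a concrete sketch of the Gzip suggestion (the helper names are mine; `java.util.zip` is part of the standard library and is also available on Android):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

// Round-trips a byte payload through gzip using the standard
// java.util.zip classes. Text payloads shrink dramatically, which
// directly shortens radio-on time.
public class GzipUtil {
    public static byte[] compress(byte[] data) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            try (GZIPOutputStream gz = new GZIPOutputStream(bos)) {
                gz.write(data);
            }
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e); // in-memory streams should not fail
        }
    }

    public static byte[] decompress(byte[] gzipped) {
        try {
            GZIPInputStream gz =
                    new GZIPInputStream(new ByteArrayInputStream(gzipped));
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            byte[] buf = new byte[4096];
            int n;
            while ((n = gz.read(buf)) != -1) bos.write(buf, 0, n);
            return bos.toByteArray();
        } catch (IOException e) {
            throw new RuntimeException(e);
        }
    }
}
```

The extra CPU time spent compressing is typically far cheaper, in mAh, than the radio time saved.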

Mastering the Android Media Framework:

David Sparks gave a talk about the Android media framework, which relies on the OpenCORE system for developing multimedia applications. The default supported codecs are Vorbis, MP3, AAC, H.263, MPEG-4 SP, and H.264; MIDI support is also available. Vorbis is the preferred audio codec for Android, as it produces "good" sound with much less processing and memory cost than MP3 or AAC. The framework is also capable of looping Vorbis audio seamlessly, making it ideal for game soundtracks, ambient noise, etc. David gave a lot of information on codec specifics, but such information is better found elsewhere. As it relates to mobile development, he offered a few suggested formatting choices:

  • Authoring for Android:
    • MP4 container with H.264 video (up to 500 kbps)
    • AAC audio at 96 kbps
  • Creating content on an Android device for other Android devices:
    • 3GPP container with H.263 video (up to 384 kbps) and AMR-NB audio
  • Content for other devices:
    • QCIF @ 192 kbps, H.263 video, 3GPP container, AMR-NB audio

From above, QCIF stands for Quarter Common Intermediate Format, specifying a resolution of 176 x 144. The 3GPP container is a data header for standardized video on mobile platforms. AMR-NB stands for Adaptive Multi-Rate Narrow Band, which is a codec specified by the 3GPP group optimized for speech.

Aside from compression, the other concern for the media framework was the sound mixer. The AudioTrack and AudioRecord classes handle raw PCM audio streams (such as mic input) and send them to the mixer engine, which is controlled by the AudioManager class. The volume control on the phone only applies to the current audio output, so for sporadic noises custom controls must be implemented; for this you can set up control streams to the audio manager to change the various volume levels.

Writing Real-Time Games for Android:

Chris Pruett began his talk by saying "this is basically the opposite of the battery life talk." While he meant it to be for laughs, he wasn't kidding. Games can and should be the most dynamic, visually rich, and CPU-intensive apps available for any hardware platform. On more powerful PCs realizing hardware potential is all about pushing the most polygons through the video interface and tracking as many collisions as possible on the CPU for physics calculations. This envelope is pushed by simply adding "more stuff" to the experience. On a mobile platform, however, it is very easy to run into the upper boundary on hardware capability without even intending to. Such a benchmark was demonstrated by Chris during his talk. He wrote an app to try to draw as many android characters as he could on the screen. He stopped at 1000 sprites which took between 90 and 370 ms to draw (to be explained later). This kind of test might slow down a Sega Genesis (circa 1988) but for most gaming hardware for the last 15-20 years such performance is a joke.

That said, games for a mobile platform shouldn't be about graphics anyway if they are to be viewed on a 3" screen with a resolution below that of standard-definition TVs. Almost all other games these days are meant to be viewed on high-definition screens; in fact, one of the first Xbox 360 games displayed essential on-screen text in a font too small to be read on SDTVs (an oversight later acknowledged by the developers). With that in mind, Chris began his talk by pointing out why it is advantageous to develop games for mobile platforms. He quoted a statistic that at the time of his talk 79% of iPhone users had downloaded at least one game, many of them paid. Because of the "more stuff" mentality in console and PC games, they are expensive and time-consuming to make, requiring large teams of artists and developers working for years to put out one title. A mobile game, however, like the demo that Chris showed during his talk, can take one person about a month to complete, and it can be just as fun as a big-budget industry title as long as it is cleverly designed and polished.

From there Chris delved into the technical idiosyncrasies of Android game development. His engine for a 2D side-scrolling platformer (think Super Mario Bros.) consists of a "game graph" which is traversed at each frame, prioritizing objects by their proximity to the current screen location. He stressed at this point "framerate-independent motion." In other words, he warned developers to make sure to update the screen based on a real-time frame rate, not just "when I'm done traversing the graph." The reasons for this are broad and varied, but they all come down to keeping the game smooth and responsive to avoid frustrating the user. There is nothing worse than frame lag during an exciting or hazardous part of a level.
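The framerate-independent-motion idea can be sketched in a few lines (my own illustration, not Chris's engine code; it uses integer fixed-point math since the talk elsewhere discourages floats on an FPU-less CPU):

```java
// Frame-rate-independent motion: position advances by velocity times the
// REAL elapsed time since the last frame, so an object moves at the same
// speed whether a frame took 16 ms or 300 ms (e.g. after a GC pause).
// Position is kept in 1/1000ths of a unit to avoid floating point.
public class Mover {
    private long positionMilli;   // position * 1000 (fixed point)
    private final long velocity;  // units per second

    public Mover(long startMilli, long velocity) {
        this.positionMilli = startMilli;
        this.velocity = velocity;
    }

    // dtMillis: wall-clock time since the last update, e.g. measured
    // with SystemClock.uptimeMillis() on Android.
    public void update(long dtMillis) {
        // (units/sec) * ms = milli-units, so no division is needed.
        positionMilli += velocity * dtMillis;
    }

    public long positionMilli() { return positionMilli; }
}
```

Two fast 16 ms frames and one slow 32 ms frame leave the object in exactly the same place, which is the property Chris was asking for.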

Chris recommends maintaining as much flexibility for new classes, interfaces, features, etc. as possible while you develop the game. Rather than trying to optimize and tweak performance as you go, it is better to make such modifications only as needed, in order to speed the development process. Performance optimization is for polishing once the project is done.

Though tweaking should be left until the end, there are various macroscopic performance concerns to keep in mind throughout development. Chief among the processes that hurt performance is garbage collection. As a frame of reference (no pun intended), most digital video runs at a minimum of 30 frames per second, which means each frame must be rendered in 33.3 ms or less. Chris measured the average time for a garbage collection pass to be 100-300 ms, which for most apps is a perfectly reasonable delay, but any process that takes as long as 10 frames to complete will seriously slow the game and frustrate the user. In order to avoid invoking the GC, the game must essentially work with totally static memory. You must avoid allocating memory, and if you do allocate it, you can't release it until a time when the game no longer has to render in real time (i.e., between levels). Collections and iterators (like Vector, LinkedList, etc.) also allocate and release memory constantly, thus invoking the GC. Any built-in array-sorting function, enum types (it was not clear why these allocate memory), and any function that returns a read-only string (like class.getX()) will also invoke the GC eventually.
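"Totally static memory" usually means preallocating everything up front and recycling objects instead of creating them mid-frame. A minimal object-pool sketch (class and field names are mine) looks like this:

```java
// A fixed-size pool of reusable objects: everything is allocated once in
// the constructor, so the per-frame game loop never calls `new` and
// therefore never gives the garbage collector a reason to run.
public class ParticlePool {
    public static class Particle {
        public int x, y;        // game state, reset by the caller
        public boolean active;  // is this slot currently in use?
    }

    private final Particle[] pool;

    public ParticlePool(int capacity) {
        pool = new Particle[capacity];
        for (int i = 0; i < capacity; i++) {
            pool[i] = new Particle(); // allocate up front, at load time
        }
    }

    // Hand out an inactive particle instead of allocating a new one.
    public Particle obtain() {
        for (Particle p : pool) {
            if (!p.active) { p.active = true; return p; }
        }
        return null; // pool exhausted; the caller must cope
    }

    // Return a particle to the pool; no memory is freed.
    public void release(Particle p) {
        p.active = false;
    }
}
```

The trade-off is a hard capacity limit, but for a game that is exactly the predictability you want.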

Though OpenGL is of course necessary for 3D graphics, calls to native libraries are also expensive. Chris benchmarked calls through interfaces and to JNI libraries at about 30% slower than regular function calls, so he suggests using static functions whenever possible. Since the CPU has no floating-point unit, floating-point data types should be avoided. He also suggests using the final keyword and local variables whenever possible.

Debugging is also different for games. The Log.d() function is quicker than System.out.print() and should be used for logging. The traceview tool is also very useful for games; to use it effectively, Chris recommends adding a profiling menu item to turn tracing on and off for different parts of the game, to avoid large log files and confusing output.

As inspiration for his sprite benchmark, Chris explored various methods for drawing graphics to the screen. There are two main methods: the Canvas and OpenGL. The Canvas utilizes the CPU only and is good for quick and easy 2D drawing. It is visibly slow when drawing more than about 10 sprites, but it is very easy to set up and use. OpenGL is a hardware-accelerated graphics library for 2D and 3D drawing, capable of rendering much more complex scenes than the Canvas. Chris did not go into Android's 3D capabilities, but discussed three different OpenGL methods for drawing 2D images: Quads_ortho, VBO_quads, and draw_texture. In his presentation he displayed benchmark data for the four draw methods (three OpenGL and Canvas). He found draw_texture to be the most efficient for his purposes, but given different media the other methods might be optimal. The Canvas was more or less equivalent to the OpenGL methods for 10-100 sprites (draw_texture was able to draw 100 sprites without any lag whatsoever at 30 ms/frame, while the Canvas took about 60 ms/frame), but at 1000 sprites the Canvas took 4 times as long as draw_texture. Finally, to further optimize render performance, Chris suggested using an atlas texture (a monolithic image that contains all of the tiles for an object, e.g. a single image with all animation frames for a sprite, or a single image with all generic map tiles) and ATITC texture compression.

Some final notes: add a sleep call within the onTouchEvent method if using touch-screen input, as the events are sent in very rapid succession and can affect game performance. He also warned to use only direct buffers, as there is a bug in the framework that will let a VBO silently use an indirect buffer without reporting an error (I don't actually know what this means, but I jotted it down to make sense of it later). All of the above rendering methods are used within the GLSurfaceView, which is the view needed for drawing hardware-accelerated graphics.

To conclude, stepping back from the technical details, Chris emphasized "highly competent" game development. Throughout the talk he kept reminding us that there is no sense in making a game if it isn't fun, and fun should always be the top priority. Some specific design elements to look out for are to avoid relying on the hardware keyboard and trackball, as future phones may not support them (the new Samsung model doesn't have either). Also, it is important to keep the final deployment size small, ideally in the range of 2-3 MB. Large games will be the first thing to go when a user needs to clear up space, and odds are the extra data content will only slow down the game. He also urged us to come up with clever ways to utilize the always-on internet, as it encourages innovation and increasingly dynamic experiences. Finally his advice was "polish, polish, polish." The difference between a great game and a flop is in the user's experience while playing it. If the game is unfair, finicky, ugly, or slow, it will get bad ratings on the marketplace and will never be played.

Debugging Arts of the Ninja Masters:

Justin Mattson gave a talk on debugging Android apps. Debugging for all platforms requires, above all else, an organized method for determining the source of a problem. An even harder task is finding the problems you don't even know you have yet, but thankfully the Android SDK comes with a few tools to help out.

The first tool Justin explained is also the tool to start with: logcat. The Eclipse plug-in includes a logcat view in the debug perspective, akin to the Java VM console. It outputs all messages, debug or otherwise, sent from the device to the development environment, including screen-orientation changes, errors, warnings, and any information the developers deemed useful. Each message is given one of the following severity levels, ordered by importance: Error, Warning, Info, Debug, and Verbose. Filtering logcat is very important, as the amount of information is too great to deal with all at once. Depending on the bug, different filter levels are needed. If an app crashes at boot, only the Error-level messages matter; these give a stack trace, just as a C++ or Java debugger would, to point out the root of whatever error caused the application to crash. For more subtle problems, the Debug filter is best.

Moving on from logcat, Google provides a program which is invaluable for mobile development: traceview. In my talk I will give a demonstration of how this tool works, but it is a very robust (by debug-tool standards) program for viewing the complete stack trace of a program between the calls to Debug.startMethodTracing() and Debug.stopMethodTracing(). To use traceview, the developer must place these start and stop calls around the regions he wants to trace. Tracing the entire program is impractical because tracing itself degrades performance and the amount of data collected is too large to just "let run." Once the program has run and the start-stop region has been passed, the trace file can be pulled off the device using adb (the debug bridge service on the development machine) and opened in traceview. The developer is greeted by a per-thread timeline at the top of the window showing a color-coded graph of which function was being called at each moment, to a precision of 0.001 ms. Such precision is the reason for the large file sizes: as a frame of reference, a 7.8-second trace I ran on an app I was working on recorded 723 different function calls into a ~1.2 MB file.

Using traceview, a developer can determine very quickly where performance suffers by looking for large same-color regions of the graph (functions are color-coded to avoid similar consecutive colors). For example, Justin showed a screenshot he captured while working on the Google Finance app. The app was loading very slowly, and during a trace of the load he noticed that it was spending about two seconds in a getTimeZone function. This turned out to be because the hardware which determines the user's location, be it by network or by GPS, takes a while to get a fix, and the main activity was waiting for its output. To remedy the problem, he moved the call to a low-priority thread, since the time zone was only needed to display the "last updated" time, which is not immediately important.

The third tool Justin showed was Hierarchy Viewer. This tool lets the developer visually inspect the nodes of his GUI layout. Since the same on-screen configuration can be built many different ways, there can be many approaches to what is ultimately the same result. By viewing the layout in terms of its hierarchy, optimizations can be made to "flatten" it. Rather than nesting visual objects within 5 tiers of layout rules, a solution that organizes all elements under a single parent will be faster and more stable.

Using the tools that the SDK offers, Android developers can be extremely thorough and efficient in fixing bugs and optimizing their applications. Beyond just fixing errors and preventing crashes, the debug process should be able to take Occam's Razor to an application and keep its activities precise and its interface responsive with the user's convenience in mind.

Fun Hacks and Cool JavaScript:

Coming Next Week

r2 - 2009-06-12 - 02:07:08 - AriTrachtenberg

Laboratory of Networking and Information Systems
Photonics Building, Room 413
8 St Mary's Street,
Boston MA 02215

Initial web site created by Sachin Agarwal (ska@alum.bu.edu), Modified by Weiyao Xiao (weiyao@alum.bu.edu), Moved to TWiki backend by Ari Trachtenberg (trachten@bu.edu). Managed by Jiaxi Jin (jin@bu.edu).