Software Manager in the System Optimization Technology Center (SOTC), Intel Corporation. Kerry has been with Intel for eight years, four of them on the open source mobile OS software stack on IA, including Android optimization and the MeeGo SDK. Before joining Intel, Kerry worked at Motorola on mobile platform and 2G wireless base-station software development. Kerry holds a master's degree in electronics engineering.
Architect of the System Software Optimization Center, Intel Corporation. Xiao-Feng has been with Intel for 12 years, with extensive technical experience in parallel systems, compiler design, and runtime technologies; he has authored about 20 academic papers and 10 U.S. patents. Two years ago, Xiao-Feng initiated the evaluation and optimization efforts for the best Android user experience on Intel platforms. Before joining Intel, Xiao-Feng was a technical manager at Nokia Research Center. Xiao-Feng holds a PhD in computer science and is a member of the Apache Software Foundation. His personal homepage can be found at http://people.apache.org/~xli.
Engineering manager in the Open Source Technology Center, Intel Corporation. Bingwei has been with Intel for 11 years, with broad experience in Linux, open source software, and system engineering. His work spans enterprise to client platforms and is now focused on mobile OS. Bingwei holds a master's degree in computer science.
Senior software engineer from Intel's Open Source Technology Center. He has been with Intel for 7 years working on a variety of projects, including virtualization, manageability, and OSV enabling. Most recently Yong has been working on power management for a wide range of mobile operating systems such as Moblin, MeeGo, Tizen, and Android. Yong graduated from Beijing University of Aeronautics and Astronautics and holds a master's degree in computer science. He enjoys sports and reading in his spare time.
Engineering Manager in Intel's Open Source Technology Center, leading a team on mobile OS and HTML5 tools development. Before that, as a research engineer, Jackie focused on wireless networks and energy-efficient communications. Prior to joining Intel in 2004, Jackie was at the Chinese Academy of Sciences developing embedded operating systems and smartphone products. Jackie received his M.S. and B.S. in Engineering Mechanics from Beijing University of Aeronautics and Astronautics in 2002 and 1999, respectively. He has two U.S. patent applications.
Mobile OS Architecture Trends - Part I Published: August 28, 2013 • Service Technology Magazine Issue LXXV
The world is flat because it is becoming increasingly mobile, fast, connected, and secure. People expect to move around easily with their mobile devices, keeping close communications with their partners and family, enjoying versatile usage models and infinite content, without worrying about device and data management. All of this places requirements on mobile devices, of which the mobile OS is the soul. Based on our years of experience in mobile OS design and an extensive survey of the current industry landscape, we believe there are several commonalities in future mobile OS architecture, such as user experience, power management, security design, cloud support, and openness design. We have developed an analysis model to guide our investigation. In this article, we describe our investigation of the trends in mobile OS architecture over the next decade by focusing on these major commonalities. Based on the findings, we also review the characteristics of today's mobile operating systems from the perspective of architecture trends.
Mobile OS design has experienced a three-phase evolution over the past decade: from the PC-based OS to an embedded OS to the current smartphone-oriented OS. Throughout the process, mobile OS architecture has gone from complex to simple to something in-between. The evolution process is naturally driven by the technology advancements in hardware, software, and the Internet:
The usage model of past mobile devices was limited. A user mostly ran the device applications for data management and local gaming, only occasionally browsing static Web pages or accessing specific services like e-mail. In other words, the possible usages of the device were predefined by the preinstalled applications when the user purchased it. This has largely changed with new mobile devices, where the device is virtually a portal to various usage models. All the involved parties, such as service providers, application developers, and other device users, continuously contribute and interact through the device with its owner. Figure 1 shows the high-level usage model difference between past and new mobile devices.
Figure 1 - high-level usage models of mobile devices (Source: Intel Corporation, 2012)
The representatives of current mobile operating systems include Apple's iOS* 5.0, Google Android* 4.0, Microsoft Windows* Phone 7.0, and a few others. In terms of their usage models, they share more similarities than differences:
The similarities of the current mobile operating systems reflect the advancement trend in hardware, software, and the Internet. Anticipating the trend of mobile operating systems, we believe the major focuses of next-generation mobile OS design will include user experience, battery life, cloud readiness, security, and openness. These are, to a large extent, conflicting targets:
In this article, we try to investigate the various aspects of mobile OS design and present our opinions about the future of mobile OS architecture.
The article is arranged as follows. Based on the framework laid out in this section, we use separate sections to discuss the respective topics, in the order of user experience, power management, security, openness, and cloud readiness. Finally, we offer discussions and a summary.
User Experience (UX)
Traditional performance is inadequate to characterize modern client devices. Performance is more about the steady execution state of the software stack and is usually reported as a final score of total throughput in the processor or other subsystems. User experience is more about the dynamic state transitions of the system triggered by user inputs. The quality of the user experience is determined by such things as user-perceivable responsiveness, smoothness, coherence, and accuracy. Traditional performance metrics can measure every link in the chain of a user interaction, but they do not evaluate the full chain as a whole. Thus, traditional performance optimization methodology cannot simply be applied to user experience optimization. It is time to invest in device user interaction optimizations so as to bring the end user a pleasant experience.
User Interactions with Mobile Devices
In a recent performance measurement with a few Android devices on the market, we found a device X that performed uniformly worse than another device Y on common benchmarks in graphics, media, and browsing. Yet the user-perceivable experience with device X was better than with device Y. The root cause we identified was that traditional benchmarks, or benchmarks designed in traditional ways, did not really characterize user interactions; they measured the computing capability (such as executed instructions) or the throughput (such as processed disk reads) of the system and its subsystems.
Take video evaluation as an example. Traditional benchmarks only measure video playback performance with metrics like FPS (frames per second) or frame drop rate. This methodology has at least two problems in evaluating user experience. The first problem is that video playback is only part of the user interaction in playing video. A typical life cycle of the interaction usually includes at least the following links: "launch player" > "start playing" > "seek progress" > "video playback" > "back to home screen." So good performance in video playback alone cannot characterize the real user experience of playing video. User interaction evaluation is a superset of traditional performance evaluation.
The other problem is that using FPS as the key metric to evaluate the smoothness of user interactions does not always reflect the actual user experience. For example, when we flung a picture in the Gallery3D application, device Y showed obvious stuttering during the picture scrolling, yet the FPS value of device Y was higher than that of device X. In order to quantify the difference between the two devices, we collected the data of every frame during a picture fling operation in the Gallery3D application on both device X and device Y, as shown in Figure 2 and Figure 3 respectively. Every frame's data is given as a vertical bar, where the x-axis is the time when the frame is drawn, and the height of the bar is the time it takes the system to draw the frame. From the figures, we can see that device X obviously has a lower FPS value than device Y, but with a smaller maximal frame time, fewer frames longer than 30 ms, and smaller frame time variance. This means that, to characterize the user experience of the picture fling operation, metrics like maximal frame time and frame time variance should also be considered.
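To make the comparison concrete, the smoothness metrics discussed above can be computed from a list of per-frame draw times. The following is a minimal sketch; the function name, the sample frame times, and the 30 ms jank threshold are illustrative, not the actual data behind Figures 2 and 3.

```python
# Sketch: smoothness metrics from per-frame draw times (milliseconds).
# FPS alone can hide stutter that max frame time and variance reveal.
from statistics import pvariance

def frame_metrics(frame_times_ms, duration_s, jank_threshold_ms=30.0):
    """Summarize one capture of a fling/scroll gesture."""
    return {
        "fps": len(frame_times_ms) / duration_s,
        "max_frame_ms": max(frame_times_ms),
        "frames_over_threshold": sum(t > jank_threshold_ms for t in frame_times_ms),
        "frame_time_variance": pvariance(frame_times_ms),
    }

# Illustrative device Y: more frames overall, but a few very long ones.
device_y = frame_metrics([10, 12, 11, 55, 10, 12, 60, 11, 10, 12], duration_s=0.5)
# Illustrative device X: fewer frames, but uniform frame times.
device_x = frame_metrics([18, 19, 18, 20, 19, 18, 19, 20], duration_s=0.5)

print(device_y["fps"] > device_x["fps"])                    # True: Y wins on FPS
print(device_y["max_frame_ms"] > device_x["max_frame_ms"])  # True: but Y stutters
```

With these made-up numbers, device Y wins on FPS yet loses on maximal frame time, frame-time variance, and the count of frames over 30 ms, reproducing the pattern described in the text.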
Figure 2 - Frame times of a fling operation in Gallery3D application on device X (Source: Intel Corporation, 2011)
Figure 3 - Frame times of a fling operation in Gallery3D application on device Y (Source: Intel Corporation, 2011)
As a comparison, Figure 4 shows the frame data of a fling operation after we optimized the device Y. Apparently all the metrics have been improved and the frame time distribution became much more uniform.
Figure 4 - Frame times of a fling operation in Gallery3D application on device Y after optimization (Source: Intel Corporation, 2011)
Another important note is that user experience is a subjective process; just consider the experience of watching a movie or appreciating music. Current academic research uses various methodologies such as eyeball tracking, heartbeat monitoring, or simply polling users to understand user experience. For our software engineering purposes, in order to analyze and optimize user interactions systematically, we categorize the interaction scenarios into four kinds:
Among them, "inputs to the device" and "control of the device" are related to the user experience aspect of how a user controls a device. "Device response to the inputs" and "system state transition" are related to the aspect of how the device reacts to the user. We can map a user interaction life cycle into scenarios that fall into the categories above; then for each scenario, we can identify the key metrics in the software stack to measure and optimize.
User Interaction Optimizations
As we described in the last subsection, there is no clear-cut, objective way to measure the user experience. We set up the following criteria in our measurement of user interactions:
Guided by the measurement criteria, we focus on two complementary aspects of the user experience: how a user controls a device and how a device reacts to the user. How a user controls a device has mainly two measurement areas:
How a device reacts to a user also has two measurement areas:
For these four measurement areas, once we identify a concrete metric to use, we need to understand how that metric relates to a "good" user experience. Since user experience is a subjective topic that depends heavily on a person's physiological state and personal taste, there is not always a scientific conclusion about what value of a metric constitutes a "good" user experience. For those cases, we simply adopt industry experience values. Table 1 gives some examples of these industry experience values.
Table 1 - Example industry experience values for user experience (Source: Intel Corporation, 2011)
Due to human nature, there are two caveats software engineers should keep in mind when optimizing user experience.
The value of a metric usually has a range for "good" user experience. A "better" value than the range does not necessarily bring "better" user experience. Anything beyond the range limit could be invisible to the user.
The values here are only rough guidelines for common cases with typical people. For example, a seasoned game player may not be satisfied with the 120 fps animation. On the other hand, a well-designed cartoon may bring perfect smoothness with 20 fps animation.
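The two notes above amount to treating each experience value as a range rather than a target: values beyond the range limit bring no perceivable benefit. A minimal sketch of applying such ranges follows; the specific thresholds (about 100 ms for touch response, 30 to 60 fps for animation) are commonly cited illustrative values, not the actual contents of Table 1.

```python
# Sketch: rating a metric value against an industry experience range.
# A value inside the range is "good"; exceeding the upper bound brings
# no perceivable benefit (and may waste power). Ranges are illustrative.

GOOD_RANGES = {
    "touch_response_ms": (0, 100),  # under ~100 ms feels instantaneous
    "animation_fps": (30, 60),      # smooth for typical content and users
}

def rate(metric, value):
    low, high = GOOD_RANGES[metric]
    if metric.endswith("_ms"):
        # Latency metric: lower is better, but anything under the
        # perceivable limit is equally fine.
        return "good" if value <= high else "needs-optimization"
    # Rate metric: higher is better only up to the range limit.
    if value >= high:
        return "good-enough"  # beyond the limit is invisible to the user
    if value >= low:
        return "good"
    return "needs-optimization"

print(rate("animation_fps", 120))     # "good-enough": no perceivable gain
print(rate("touch_response_ms", 80))  # "good"
```

The "good-enough" outcome captures the first note: a 120 fps animation does not read as better than 60 fps to a typical user, so pushing past the range is wasted effort (or wasted power, as the power management discussion later makes explicit).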
Now we can set up our methodology for user experience optimization. It can be summarized in the following steps.
Step 1: Receive the user experience sightings from the users or identify the interaction issues with manual operations. This can be assisted by high-speed camera or other available techniques.
Step 2: Define the software stack scenarios and metrics that transform a user experience issue into a software symptom.
Step 3: Develop a software workload to reproduce the issue in a measurable and repeatable way. The workload reports the metric values that reflect the user experience issue.
Step 4: Use the workload and related tools to analyze and optimize the software stack. The workload also verifies the optimization.
Step 5: Get feedback from the users and try more applications with the optimization to confirm the user experience improvement.
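The heart of this methodology is Step 3: turning a subjective sighting into a repeatable, measurable workload. The following sketch shows the shape such a workload harness might take; the function names (`capture_frames`, `run_workload`) are illustrative stand-ins for platform-specific instrumentation, and the frame capture is simulated with a seeded random generator purely so the example is self-contained.

```python
# Sketch of Step 3: a repeatable workload that reproduces a UX issue and
# reports the metrics reflecting it, so the optimization in Step 4 can be
# verified against the same numbers. All names here are hypothetical.
import random

def capture_frames(seed):
    """Stand-in for real frame-time capture during a scripted fling.
    A fixed seed makes each run repeatable."""
    rng = random.Random(seed)
    return [rng.uniform(8, 40) for _ in range(60)]

def run_workload(iterations=5, jank_threshold_ms=30.0):
    """Run the scripted scenario several times and report, per run,
    the metric values that reflect the user experience issue."""
    reports = []
    for i in range(iterations):
        frames = capture_frames(seed=i)
        reports.append({
            "run": i,
            "max_frame_ms": max(frames),
            "janky_frames": sum(t > jank_threshold_ms for t in frames),
        })
    return reports

for report in run_workload():
    print(report)
```

The key design property is determinism: two invocations of the workload produce identical reports, so a before/after comparison isolates the effect of the optimization rather than run-to-run noise.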
Based on this methodology, we have developed an Android Workload Suite (AWS) [REF-33] that includes almost all the typical use cases of an Android device. We have also developed a toolkit called UXtune [REF-34] that assists user interaction analysis in the software stack. Unlike traditional performance tuning tools, UXtune correlates user-visible events with low-level system events. As the next step, we are porting the work from Android to other systems.
Mobile OS Design for User Experience
Based on our experience with Android, we found that UX optimization is somewhat similar to parallel application optimization, only with more complexities, for the following four reasons:
One critical lesson learned from our experience is to understand the critical path of certain operations. Different from parallel application tuning, mobile OS design does not always have strict explicit synchronization between the involved hardware components and software threads. For example:
The major point regarding power is that, for user experience, faster is not necessarily better, contrary to traditional performance optimization. Once the system reaches the best user-perceivable responsiveness, the next optimization goal becomes "the slower the better" within that perceivable range. For example, if a game sustains 60 fps without problems, the mobile OS should then drive both CPU utilization and CPU frequency as low as possible. We must always distinguish the two cases (faster-better and slower-better) clearly, since the optimization techniques for them can be quite different.
When multiple cores and GPUs are introduced, the two cases above become even more pronounced. More cores usually help "faster-better" scenarios, but hurt battery life in "slower-better" scenarios. A mobile OS has to turn cores on and off with a smart policy, because the on/off latency could be longer than a touch operation (usually in the range of 100 ms to 500 ms).
For parallel application performance tuning, people have found "execution replay" useful in debugging. It usually replays one multithreaded application, that is, within one process. For UX, the interactions cross processes through IPC and span the CPU, GPU, and display controller (DC); they are a system-wide collaboration. The traditional replay technique cannot help here.
Copyright © 2012 Intel Corporation. All rights reserved.