
IBM’s Watson computer system recently beat two of the strongest Jeopardy players in the world in a real match of Jeopardy. The match was the culmination of four years of work by IBM researchers. This week’s question has a dual purpose – to focus discussion on how the Watson innovations can/will/might affect the techniques and tools available to embedded developers – and to solicit questions from you that I can ask the IBM research team when I meet up with them (after the main media furor dies down a bit).

The Watson computing system is the latest example of innovation in extreme processing problem spaces. NOVA’s video “Smartest Machine on Earth” provides a nice overview of the project and the challenges the researchers faced while getting Watson ready to compete against human players in the game Jeopardy. While Watson is able to interpret the natural language wording of Jeopardy answers and tease out appropriate responses for the questions (Jeopardy provides answers and contestants provide the questions), it was not clear from the press material or the video whether Watson was processing natural language in audio form or only in text form. A segment near the end of the NOVA video casts doubt on whether Watson was able to work with audio inputs.

In order to bump Watson’s performance into the champion “cloud” (a distribution, presented in the video, of the performance of Jeopardy champions), the team had to rely on machine learning techniques so that the computing system could improve how it recognizes the many different contexts that apply to words. Throughout the video, we see that the team kept adding more pattern recognition engines (rules?) to the Watson software so that it could handle different types of Jeopardy questions. A satisfying segment in the video shows Watson changing its weighting engine for a Jeopardy category that it did not understand after receiving the correct answers to four questions in that category – much like a human player would refine their understanding of a category during a match.
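
To make the idea of reweighting concrete, here is a minimal sketch – not IBM’s actual algorithm; the engine names, scores, and update rule are all assumptions for illustration – of a multiplicative-weights update that boosts the scoring engines whose rankings agreed with the revealed correct answers in a category.

```c
#include <stdio.h>

#define NUM_ENGINES 3

/* Illustrative only: each scoring "engine" rates candidate answers; after the
 * correct answer for a clue is revealed, engines that backed it strongly gain
 * weight and the weights are renormalized.  Watson's real learning machinery
 * is far more elaborate -- this just shows the flavor of per-category
 * reweighting. */
typedef struct {
    const char *name;
    double      weight;   /* per-category confidence in this engine */
} engine_t;

static void update_weights(engine_t engines[], int n,
                           const double score_for_correct[], double eta)
{
    double total = 0.0;

    for (int i = 0; i < n; i++) {
        /* Reward engines in proportion to how strongly they backed the
         * answer that turned out to be correct (scores in [0,1]). */
        engines[i].weight *= (1.0 + eta * score_for_correct[i]);
        total += engines[i].weight;
    }
    for (int i = 0; i < n; i++)           /* renormalize to sum to 1 */
        engines[i].weight /= total;
}

int main(void)
{
    engine_t engines[NUM_ENGINES] = {
        { "keyword-match",  1.0 / 3.0 },
        { "date-reasoning", 1.0 / 3.0 },
        { "geo-lookup",     1.0 / 3.0 },
    };

    /* Pretend four clues in one category have been revealed; these are the
     * (made-up) scores each engine gave to the answer that proved correct. */
    const double revealed[4][NUM_ENGINES] = {
        { 0.2, 0.9, 0.1 },
        { 0.3, 0.8, 0.2 },
        { 0.1, 0.7, 0.1 },
        { 0.2, 0.9, 0.3 },
    };

    for (int clue = 0; clue < 4; clue++)
        update_weights(engines, NUM_ENGINES, revealed[clue], 0.5);

    for (int i = 0; i < NUM_ENGINES; i++)
        printf("%-15s %.3f\n", engines[i].name, engines[i].weight);

    return 0;
}
```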

Watson uses 2800 processors, and I estimate that the power consumption is on the order of a megawatt or more. This is not a practical energy footprint for most embedded systems, but the technologies that make up this system might be available to distributed embedded systems if they can connect to the main system. Also, consider that the human brain is a blood-cooled 10 to 100 W system – this suggests that we may be able to drastically improve the energy efficiency of a system like Watson in the coming years.

Do you think this achievement is huff and puff? Do you think it will impact the design and capabilities of embedded systems? For what technical questions would you like to hear answers from the IBM research team in a future article?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


I recently had an unpleasant experience related to online security. Somehow my account information for a large online game had been compromised. The speed with which the automated systems detected that the account had been hacked and locked it down is a testament to how many compromised accounts this particular service provider handles on a daily basis. Likewise, the account status was restored with equally impressive turnaround time.

What struck me most about this experience was realizing that there is obviously at least one way for malicious entities to compromise a password-protected system despite significant precautions against such a thing occurring. Keeping the account name and password secret, running software to detect and protect against viruses, Trojan horses, and key loggers, and ensuring that the data between my computer and the service provider was encrypted were not enough to keep the account safe.

The service provider’s efficiency and matter-of-fact approach to handling the situation suggests there are known ways to circumvent these security measures. The provider offers, and recommends, an additional layer of security: single-use passwords generated by a device it sells for a few bucks and ships for free.
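
For readers curious what such a single-use password token typically computes, here is a hedged sketch of an HOTP-style generator along the lines of RFC 4226; the provider’s actual device may use a different or proprietary scheme, and the shared secret and counter values below are placeholders.

```c
#include <stdio.h>
#include <stdint.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>   /* link with -lcrypto */

/* Sketch of an HOTP-style one-time password (RFC 4226): HMAC-SHA1 over a
 * moving counter, dynamic truncation, then reduce to 6 decimal digits.
 * The shared secret and counter below are placeholders for illustration. */
static unsigned int hotp(const unsigned char *secret, size_t secret_len,
                         uint64_t counter)
{
    unsigned char msg[8];
    unsigned char digest[EVP_MAX_MD_SIZE];
    unsigned int  digest_len = 0;

    /* The counter is encoded as an 8-byte big-endian value. */
    for (int i = 7; i >= 0; i--) {
        msg[i] = (unsigned char)(counter & 0xff);
        counter >>= 8;
    }

    HMAC(EVP_sha1(), secret, (int)secret_len, msg, sizeof(msg),
         digest, &digest_len);

    /* Dynamic truncation: low 4 bits of the last byte select an offset. */
    unsigned int offset = digest[digest_len - 1] & 0x0f;
    uint32_t bin_code =
          ((uint32_t)(digest[offset]     & 0x7f) << 24)
        | ((uint32_t)(digest[offset + 1] & 0xff) << 16)
        | ((uint32_t)(digest[offset + 2] & 0xff) << 8)
        |  (uint32_t)(digest[offset + 3] & 0xff);

    return bin_code % 1000000;   /* 6-digit code */
}

int main(void)
{
    const unsigned char secret[] = "placeholder-shared-secret";

    for (uint64_t counter = 0; counter < 3; counter++)
        printf("counter %llu -> %06u\n",
               (unsigned long long)counter,
               hotp(secret, sizeof(secret) - 1, counter));
    return 0;
}
```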

As more embedded systems support online connectivity, the opportunity for someone to break into those systems increases. The motivations for breaking into these systems are myriad. Sometimes, as in the case of my hacked account, there is the opportunity for financial gain. In other cases, there is notoriety in demonstrating that a system has a vulnerability. In yet other cases, there may be a desire to cause physical harm, and it is this type of motivation that prompts this week’s question.

When I first started working with computers in a professional capacity, I found out there were ways to damage equipment through software. The most surprising example involved making a large line printer destroy itself by sending it a particular sequence of characters that caused all of the carriage hammers to strike the ribbon at the same time, over and over. By spacing the character sequences with blank lines, a print job could actually make a printer that weighed several hundred pounds start rocking back and forth. If the printer was permitted to continue this behavior, its mechanical parts could be severely damaged.

It is theoretically possible to do analogous things to industrial equipment, and with more systems connected to remote or public networks, the opportunities for such mischief are real. Set-top boxes attached to televisions are connecting to the network – offering a path for mischief if the designers of the set-top box and/or television unintentionally left an opening in the system for someone to exploit.

Does an embedded design need to consider security implications at all? Where is the line between when implementing embedded security is important and when it is a waste of resources? Are the criteria for when embedded security is needed based on the end device or on the system that the device operates within? Who should be responsible for making that call?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


The Super Bowl played out this weekend and the results were quite predictable – one team won and the other lost. What was less predictable was knowing which of those teams would end up in the win column. Depending on their own set of preferences, insights, and luck, many people “knew” which team would win before the game started, but as the game unfolded toward its final play, many adjusted their prediction – even against their own wishes – as to the eventual outcome.

Now that this shared experience has passed, I think it appropriate to contemplate how well we can, as individuals and as an industry, reliably predict the success of the projects and technologies that we hope for and rely on when designing embedded systems. I think the exercise offers additional value in light of the escalating calls for public organizations to invest more money to accelerate the growth of the right future technologies to move the economy forward. Can we reliably predict which technologies are the correct ones to pour money into (realizing that we would also be choosing which technologies not to put research money into)? In effect, can and should we be choosing the technology winners and losers before they have proven themselves in the market?

Why does it seem that a company, product, or technology gets so much hype just before it falls? Take, for example, Forbes Company of the Year recipients Monsanto and Pfizer, which appeared to be on top of the world when they received the award and then, almost immediately afterward, faced a cascade of things going horribly wrong. I will only point out that competition in the smartphone and tablet computing markets has gotten much more interesting in the past few months.

I remember seeing a very interesting television documentary on infomercials called something like “deal or no deal”. I would like to provide a link to it, but I cannot find it, so if you know what I am referring to, please share. The big takeaway for me was a segment in which a 30-year veteran of the infomercial world is asked whether he knows how to pick the winners. The veteran replied that the success rate in the market is about 10 percent – meaning that of the products he down-selects and actually brings to market, only one in ten succeeds. Despite his insights into how the market responds to products, he could not reliably identify which products would be the successful ones – luck and timing still played a huge role in a product’s success.

Luck and timing are critical. Consider that the 1993 Simon predates the iPhone by 14 years and included features, such as a touch screen, that made the iPhone stand out when it launched. Mercata predates Groupon – which recently turned down a multi-billion-dollar acquisition offer from Google – by almost a decade; timing differences with other structures in the market appear to have played a large role in the difference between the two companies’ successes. In an almost comical tragedy, the precursor to the steam engine that Hero (or Heron) of Alexandria perfected, and that many temples in the ancient world used, barely missed its perfect application at the Diolkos – and we had to wait another 1,500 years for the steam engine to be reinvented and applied to practical rather than mystical purposes.

I meet many people on both sides of the question of whether we should publicly fund future technologies to accelerate their adoption. My concern is that the track record of anyone reliably predicting the winners is so poor that we may be doing no better than chance – and possibly worse – when we have third-party entities direct money that is not their own to projects they think may or should succeed. What do you think – can anyone reliably pick winners well enough to be trusted to do better than chance and allocate huge sums of money to arbitrary winners that still need to stand up to the test of time? What are your favorite stories of snatching failure from the jaws of victory?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


Compiler technology has improved over the years – so much so that the “wisdom on the street” is that using a compiled language, such as C, is the norm for the overwhelming majority of embedded code that goes into production systems these days. I have little doubt that most of this sentiment is true, but I suspect the “last mile” challenge for compilers is far from solved – which prevents compiled languages from completely removing the need for developers who are expert in assembly language programming.

In this case, I think the largest last-mile candidate for compilers is managing and allocating memory outside of the processor’s register space. This is a critical distinction because most processors, except the very smallest and slowest, do not provide a flat memory space where every memory access completes in a single clock cycle. The register file, level 1 cache, and tightly coupled memories represent the fastest memory on most processors – and those memories represent the smallest portion of the memory subsystem. The majority of a system’s memory is implemented in slower and less expensive circuits – which, when used indiscriminately, can introduce latency and delays when executing program code.

The largest reason for using a cache in a system is to hide as much of the memory-access latency as possible so as to keep the processor core from stalling. If there were no time cost for accessing anywhere in memory, there would be no need for a cache.
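
As a rough illustration of what indiscriminate access costs, the sketch below (assuming a typical cached target with a hosted C environment; the buffer size and stride are arbitrary) touches every byte of the same buffer twice – once sequentially and once with a page-sized stride – and the strided pass typically takes noticeably longer because it defeats the cache.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Rough illustration of memory-hierarchy effects: walk the same buffer
 * sequentially (cache-friendly) and then with a large stride (cache-hostile)
 * and compare wall-clock time.  Both walks touch every byte exactly once, so
 * the only difference is the access pattern; exact numbers depend entirely on
 * the target's cache sizes and memory subsystem. */

#define BUF_BYTES  (32u * 1024u * 1024u)   /* larger than typical caches */
#define STRIDE     4096u                    /* jump roughly a page at a time */

static double walk(volatile unsigned char *buf, size_t len, size_t stride)
{
    clock_t start = clock();
    unsigned long sum = 0;

    for (size_t s = 0; s < stride; s++)
        for (size_t i = s; i < len; i += stride)
            sum += buf[i];                   /* every byte touched once */

    (void)sum;
    return (double)(clock() - start) / CLOCKS_PER_SEC;
}

int main(void)
{
    unsigned char *buf = malloc(BUF_BYTES);
    if (buf == NULL)
        return 1;
    memset(buf, 1, BUF_BYTES);               /* commit the pages up front */

    printf("sequential: %.3f s\n", walk(buf, BUF_BYTES, 1));
    printf("strided   : %.3f s\n", walk(buf, BUF_BYTES, STRIDE));

    free(buf);
    return 0;
}
```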

I have not seen any standard mechanism in compiled languages to lay out and allocate an application’s storage elements across a memory hierarchy. One problem is that such a mechanism would make the code less portable – but maybe we are reaching a point in compiler technology where that type of portability should be segmented away from code portability. Program code could consist of a portable portion and a target-specific portion that lets a developer tell the compiler and linker how to organize the entire memory subsystem.
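
There is no standard language-level mechanism, but many toolchains already expose non-portable hooks that hint at what such a separation could look like. As a hedged sketch, with a GCC-style toolchain a developer can pin a hot buffer into a named section and map that section onto fast on-chip memory in the linker script; the section and memory-region names below are assumptions that would have to match a real target.

```c
#include <stdint.h>

/* Non-portable placement hint (GCC-style toolchains): ask the compiler to
 * emit this buffer into a named section, which the target's linker script
 * then maps onto fast on-chip memory.  The section and memory-region names
 * here are illustrative only. */
__attribute__((section(".fast_ram")))
static int16_t filter_state[256];

int16_t read_tap(unsigned int i)
{
    return filter_state[i & 255u];   /* keep the buffer referenced */
}

/* Corresponding GNU-ld linker-script fragment (shown as a comment so the
 * example stays in one file):
 *
 *   MEMORY
 *   {
 *       TCM  (rwx) : ORIGIN = 0x20000000, LENGTH = 64K
 *       SRAM (rwx) : ORIGIN = 0x60000000, LENGTH = 8M
 *   }
 *   SECTIONS
 *   {
 *       .fast_ram : { *(.fast_ram) } > TCM
 *   }
 */
```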

A possible result of this type of separation is the appearance of many more tools that actually help developers focus on the memory architecture and find the optimum way to organize it for a specific application. Additional tools might arise that would enable developers to develop application-specific policies for managing the memory subsystem in the presence of other applications.

The production alternative at this time seems to be systems that either accept the consequences of sub-optimal automated memory allocation or impose policies that prevent loading applications that have not been run through a certification process verifying that each program adheres to some set of memory-usage rules. Think of running Flash programs on the iPhone (I suspect the issue of Flash on these devices is driven more by memory issues – which affect system reliability – than by dislike of another company).

Assembly language programming seems to continue to reign supreme for time-sensitive portions of code that rely on using a processor’s specialized circuits in an esoteric fashion and/or on intimate knowledge of how to organize data within the target’s memory architecture to extract optimum performance from the system, from a time and/or energy perspective. Is this an accurate assessment? Is assembly language programming a dying skill set? Are you still using assembly language in your production systems? If so, in what capacity?
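
As one concrete flavor of code that still tends to be hand-tuned, the hedged sketch below reaches an ARM saturating-add instruction (QADD) through GCC-style inline assembly on cores that implement the DSP extensions; whether a given compiler would emit the same instruction on its own from the portable C version varies by toolchain and options.

```c
#include <stdint.h>

/* Saturating 32-bit add.  The portable C version must detect overflow
 * explicitly; on an ARM core with DSP extensions the same operation is a
 * single QADD instruction, reached here through GCC-style inline assembly. */

int32_t sat_add_c(int32_t a, int32_t b)
{
    int64_t sum = (int64_t)a + (int64_t)b;
    if (sum > INT32_MAX) return INT32_MAX;
    if (sum < INT32_MIN) return INT32_MIN;
    return (int32_t)sum;
}

#if defined(__arm__)
int32_t sat_add_asm(int32_t a, int32_t b)
{
    int32_t result;
    /* QADD saturates in hardware; available on ARMv5TE and later DSP-capable
     * cores (including Cortex-M cores with the DSP extension). */
    __asm__("qadd %0, %1, %2" : "=r"(result) : "r"(a), "r"(b));
    return result;
}
#endif
```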

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.