
Embedded Insights Question of the Week


The FAA (Federal Aviation Administration) has given pilots at some airlines permission to use iPads in the cockpit in place of paper charts and manuals. To gain this permission, those airlines had to demonstrate that the tablets did not emit radio waves that could interfere with aircraft electronics. In this case, the airlines only had to certify the specific types of devices the pilots wanted to use rather than every device that passengers might want to use. This easing of requirements is a first step toward possibly allowing passengers to use electronics during takeoffs and landings.

Devices that use an E-ink display, such as the Amazon Kindle, emit less than 0.00003 volts per meter of radio interference when in use (according to tests conducted at EMT Labs) – well under the 100 volts per meter of electrical interference that the FAA requires aircraft to tolerate. Even if every passenger were using a device with emissions this low, the combined emissions would still fall well below the shielding requirements for aircraft.

One challenge, though, is whether approving some devices but not others for use throughout a flight would create situations where enough passengers accidentally leave on or operate their unapproved devices that, taken together, all of the devices exceed the safety constraints for radio emissions.

On the other hand, being able to operate an electronic device throughout a flight would be a huge selling point for many people – and that creates a further economic incentive for product designers to push their designs toward the emission limits that could gain FAA approval.

Is the talk about easing the restrictions on using electronic gadgets during all phases of a flight wishful thinking, or is the technology advancing far enough to offer devices that operate well below the safety limits for unrestricted use on aircraft? I suspect this ongoing dialogue between the FAA, airlines, and electronics manufacturers could yield some worthwhile ideas on how to ensure proper certification, operation, and failsafe functions within an aircraft environment – ideas that could make unrestricted use of electronic gadgets a real possibility in the near future. What do you think will help this idea along, and what challenges need to be resolved before it can become a reality?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


Managing the heat emanating from electronic devices has always been a challenge and design constraint. Mobile devices present an interesting set of design challenges because unlike a server operating in a strictly climate controlled room, users want to operate their mobile devices across a wider range of environments. Mobile devices place additional design burdens on developers because the size and form factor of the devices restrict the options for managing the heat generated while the device is operating.

The new iPad is the latest device whose technical specifications may or may not be compatible with what users expect. According to Consumer Reports, the new iPad can reach operating temperatures up to 13 degrees Fahrenheit higher (when plugged in) than an iPad 2 performing the same tasks under the same operating conditions. Using a thermal imaging camera, Consumer Reports measured a peak temperature of 116 degrees Fahrenheit on the front and rear of the new iPad while it played Infinity Blade II. The peak heat spot was near one corner of the device (an image is included in the referenced article).

A peak temperature like this feels warm to very warm to the touch for short periods of time. However, some people may consider a peak of 116 degrees Fahrenheit too warm for a device they plan to hold in their hands or rest on their lap for extended periods.

Many engineering trade-offs were probably considered in the final design of the new iPad. The feasible options for heat sinks or for distributing heat away from the device were likely constrained by the iPad's thin form factor, dense internal components, and larger battery. Integrating a higher pixel density display certainly constrained how the system and graphics processing were architected to deliver an improvement in display quality while maintaining acceptable battery life.

Are consumer electronics bumping up against the edge of what designers can deliver when balancing small form factors, high-performance processing and graphics, acceptable device weight, and long enough battery life? Are there design trade-offs still available to designers to further push where mobile devices can go while staying within the constraints of acceptable heat, weight, size, cost, and performance? Have you ever dealt with a warm-running system that crossed the line into running too hot? If so, how did you deal with it?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


My first experience with programming involved mailing punch cards off for batch processing on a remote computer. The results of the run would show up about a week later; the least desirable result was finding out there was a syntax error in one of the cards. I moved up in the world when we gained access to a teletype that let us enter programs directly into the computer; however, neither of these experiences hinted at the true complexity that embedded programming would entail.

The Z80 was the first processor that I worked with that truly exposed the innards of the processor to me. A key reason for this was the substantial hobbyist community that had grown up around the Z80. I had (and still have in storage) a cornucopia of technical documents that exposed in detail every part of the system and ways to use them effectively. When I look back on those memories I marvel at the amount of information that was available despite the lack of any online connectivity – or in other words, no internet.

I found significant value in being able to examine other people’s code in real applications. Today’s development support often includes application notes and sample code that address a wide range of use cases for a target processor. Online developer communities give developers a valuable opportunity to find example material and, even better, to query the community for examples of how to implement a specific function on that target processor.

I would like to confirm that the specific capabilities of the processor are less important to a beginner (because they all provide a good minimum set of functions) and that good development tools, tutorials and sample code, and responsive developer community support are more critical.

Which processor (or processors) do you find to be beginner friendly or provide the right set of development support that make getting started with the processor faster and easier? Does using an RTOS, operating system, and/or middleware make this easier or harder? Which processors are the best examples of the type of developer community support you find most valuable?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


RNGs (random number generators) have been used across a wide range of applications for many decades, and they can be implemented in a variety of forms. Pure software algorithms make it possible to reproduce a specific sequence of “random” numbers at a later time – such as when performing simulation functions or debugging a system – by reusing the same seed value for the algorithm. Some processors include a hardware random number generator to provide a sequence of numbers that is as close to a true random sequence as possible.
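
As a minimal illustration of the software-only case, the C sketch below uses the standard library's rand() to show how reusing the same seed reproduces an identical "random" sequence – the property that makes replaying a simulation or debugging run possible. The seed value here is arbitrary.

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const unsigned int seed = 12345u;      /* arbitrary fixed seed for illustration */

        for (int run = 0; run < 2; run++) {
            srand(seed);                       /* same seed => identical sequence */
            printf("run %d:", run);
            for (int i = 0; i < 5; i++) {
                printf(" %d", rand());         /* both runs print the same five values */
            }
            printf("\n");
        }
        return 0;
    }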

However, the suitability of a sequence of random numbers can vary with the context of the application using it. For example, devices that select random music tracks to play have evolved from using a truly random sequence to using one that strips out repeat appearances of the same number that occur too close to one another in the sequence.
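
A minimal sketch of that kind of filtering, assuming a playlist identified only by track indices: simply redraw whenever the new pick matches the previous one, so the same track never plays twice in a row.

    #include <stdlib.h>

    /* Pick a random track index in [0, track_count), but never the index
     * that played last time. Assumes track_count > 1 and that the caller
     * has already seeded the generator with srand(). */
    int next_track(int track_count, int last_track)
    {
        int pick;
        do {
            pick = rand() % track_count;
        } while (pick == last_track);          /* strip out immediate repeats */
        return pick;
    }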

Are random number generators a solved function for system developers? Because not all RNGs are equal in their randomness, does that affect a porting effort when moving not just from one processor to another, but from one software development toolset to another? Have you been bitten by assumptions about an RNG that turned out to be horribly unsuitable for your application, or are RNGs mature enough that such horror stories are a thing of the past?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


Software refactoring is an activity in which software is transformed in a way that preserves its external behavior while improving its internal structure. I am aware of software development tools that assist with refactoring application software, but it is not clear whether design teams engage in software refactoring for embedded code – especially for control systems.
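
As a simple, hypothetical illustration in C of what "preserving external behavior while improving internal structure" can mean, the duplicated clamping logic below is pulled into a single helper; every caller still returns the same values for the same inputs.

    /* Before refactoring, each function repeated the same clamping
     * if/else chain inline. After pulling it into one helper, the
     * internal structure is simpler but the external behavior of
     * scale_fan_duty() and scale_backlight() is unchanged. */

    static int clamp_to_percent(int value)
    {
        if (value < 0)   return 0;
        if (value > 100) return 100;
        return value;
    }

    int scale_fan_duty(int raw_duty)
    {
        return clamp_to_percent(raw_duty);     /* was: inline if/else chain */
    }

    int scale_backlight(int raw_level)
    {
        return clamp_to_percent(raw_level);    /* was: a copy of the same chain */
    }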

Refactoring was not practiced on the projects I worked on; in fact, the team philosophy was to make only the smallest change necessary when working with a legacy system to effect the change needed. First, we never had the schedule or budget needed just to make the software “easier to understand or cheaper to modify.” Second, changing the software for “cosmetic” purposes could increase downstream engineering effort, especially in the area of verifying that the changes did not break the behavior of the system under all relevant operating conditions. Note that many of the control projects I worked on were complex enough that it was difficult just to ascertain whether the system worked properly or merely coincidentally looked like it did.

Most of the material I have read about software refactoring assumes the code sits at the application layer, is not tightly coupled to a specific hardware target, and is implemented in an object-oriented language such as Java or C++. Are embedded developers performing software refactoring? If so, do you perform it on all types of software, or are there types of software that you definitely include or exclude from a refactoring effort?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


SuperSpeed USB, or USB 3.0, has been available in certified consumer products for the past two years. The serial bus specification includes a 5 Gbps signaling rate, a ten-fold increase in data rate over Hi-Speed USB. The interface relies on a dual-bus architecture that enables USB 2.0 and USB 3.0 operations to take place simultaneously, and it provides backward compatibility. Intel recently announced that its upcoming Intel 7 Series Chipset Family for client PCs and Intel C216 Chipset for servers received SuperSpeed USB certification; this may signal that 2012 is an adoption inflection point for the three-year-old specification. In addition to the ten-fold improvement in data transfers, SuperSpeed USB increases the maximum power available to devices via the bus, supports new transfer types, and includes new power management features for lower active and idle power consumption.

As SuperSpeed USB becomes available on more host-like consumer devices, will the need to support the new interface gain more urgency? Are you looking at USB 3.0 for any of your upcoming projects? If so, what features in the specification are most important to you?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


On many of the projects I worked on, it made a lot of sense to implement BISTs (built-in self-tests) because the systems either had safety requirements or the cost of executing a test run of a prototype system was high enough to justify the extra cost of making sure the system was in as good a shape as it could be before committing to the test. A quick search for articles about BIST techniques suggests that it may not be adopted as a general design technique except in safety-critical, high-margin, or automotive applications. I suspect that my literature search does not reflect reality and/or that developers are using a different term for BIST.

A BIST consists of tests that a system can initiate and execute on itself, via software and extra hardware, to confirm that it is operating within some set of conditions. In designs without ECC (error-correcting code) memory, we might include tests to ensure the memory was operating correctly; these tests might be exhaustive or based on sampling, depending on the specifics of each project and the time constraints on system boot-up. To test peripherals, we could use loopbacks between specific pins so that the system could control what the peripheral would receive and confirm that outputs and inputs matched.
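
A minimal sketch of those two kinds of checks in C, with the hardware-specific details (the scratch RAM region and the UART driver calls) left as hypothetical placeholders that a real design would supply:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    /* Walk a scratch RAM region with complementary patterns. This is
     * destructive, so it is meant to run before the region is put into
     * normal use (e.g., during boot). */
    static bool bist_ram(volatile uint32_t *start, size_t words)
    {
        for (size_t i = 0; i < words; i++) {
            start[i] = 0xAAAAAAAAu;
            if (start[i] != 0xAAAAAAAAu) return false;
            start[i] = 0x55555555u;
            if (start[i] != 0x55555555u) return false;
        }
        return true;
    }

    /* Loopback check: with TX wired (internally or externally) back to RX,
     * every byte sent should come back unchanged. uart_send_byte() and
     * uart_recv_byte() stand in for the project's real driver calls. */
    extern void    uart_send_byte(uint8_t b);
    extern uint8_t uart_recv_byte(void);

    static bool bist_uart_loopback(void)
    {
        static const uint8_t patterns[] = { 0x00, 0xFF, 0xA5, 0x5A };
        for (size_t i = 0; i < sizeof patterns; i++) {
            uart_send_byte(patterns[i]);
            if (uart_recv_byte() != patterns[i]) return false;
        }
        return true;
    }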

We often employed a longer and a shorter version of the BIST to accommodate boot time requirements. The longer version usually was activated manually or only as part of a cold start (possibly with an override signal). The short version might be activated automatically upon a cold or warm start. Despite the effort we put into designing, implementing, and testing BIST as well as developing responses when a BIST failed, we never actually experienced a BIST failure.

Are you using BIST in your designs? Are you specifying your own test sets, or are you relying on built-in tests that reside in BIOS or third-party firmware? Are BISTs a luxury or a necessity with consumer products? What are appropriate actions that a system might make if a BIST failure is detected?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


I remember when I first learned about this thing called endianness as it pertains to ordering the higher- and lower-order bytes of data that consumes more than a single byte. The two most common ordering schemes were big and little endian. Big endian stores the most significant bytes ahead of the least significant bytes; little endian stores data in the opposite order, with the least significant bytes ahead of the most significant bytes. The times I was most aware of endianness were when we were defining data communication streams (telemetry data in my case) that transferred data from one system to another that did not use the same type of processor. The other context where knowing endianness mattered was when the program needed to perform bitwise operations on data structures (usually for execution efficiency).
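
For reference, here is a small C sketch of the two situations the paragraph above describes: detecting the host's byte order at run time, and serializing a multi-byte value in a fixed (big-endian) order so a telemetry stream reads the same regardless of which processor produced it. The function names are illustrative.

    #include <stdint.h>
    #include <stdio.h>

    /* Returns 1 on a little-endian host (least significant byte stored first). */
    static int is_little_endian(void)
    {
        uint16_t probe = 0x0102;
        return *(uint8_t *)&probe == 0x02;
    }

    /* Serialize a 32-bit value most-significant-byte first (big endian),
     * so the wire format is identical regardless of the host's byte order. */
    static void put_be32(uint8_t out[4], uint32_t value)
    {
        out[0] = (uint8_t)(value >> 24);
        out[1] = (uint8_t)(value >> 16);
        out[2] = (uint8_t)(value >> 8);
        out[3] = (uint8_t)(value);
    }

    int main(void)
    {
        uint8_t frame[4];
        put_be32(frame, 0x12345678u);
        printf("host is %s-endian, frame bytes: %02X %02X %02X %02X\n",
               is_little_endian() ? "little" : "big",
               frame[0], frame[1], frame[2], frame[3]);
        return 0;
    }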

If what I hear from semiconductor and software development tool providers is correct, only a very small minority of developers deal with assembly language anymore. Additionally, I suspect that most designers are no longer involved in driver development either. With the abstractions that compiled languages and standard drivers offer, does endianness affect how software developers write their code? In other words, are you working with data types that abstract how the data is stored and used, or are you implementing functions in a way that requires you to know how your data is internally represented? Have software development tools successfully abstracted this concept away from most developers?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


I have always maintained that the market for 8-bit processors would not fade away – in fact, there are still a number of market niches that rely on 4-bit processors (such as clock movements and razors with vibrating handles). The smaller processor architectures can hit the lowest price points and the lowest energy consumption years before the larger 32-bit architectures can offer anything close to parity. In other words, I believe there are very small application niches for which even 8-bit processors are currently too expensive or too energy hungry.

Many marketing reports have identified that the available software development tool chains play a significant role in whether a given processor architecture is chosen for a design. It seems that the vast majority of resources spent evolving software development tools are focused on the 32-bit architectures. Is this difference in how software development tools for 8- and 32-bit processors are evolving affecting your choice of processor architectures?

I believe the answer is not as straightforward as some processor and development tool providers would make it out to be. First, 32-bit processors are generally much more complex to configure than 8-bit processors, so the development environments, which often include drivers and configuration wizards, are nearly a necessity for 32-bit processors and almost a non-issue for 8-bit processors. Second, the software that 8-bit processors run is generally smaller and contends with less system-level complexity. Additionally, as embedded processors continue to find their way into ever smaller tasks, the software may need to be even simpler than today’s 8-bit software to meet the energy requirements of the smallest subsystems.

Do you feel there is a significant maturity difference between software development tools targeting 8- and 32-bit architectures? Do you think there is, or will be, a widening gap in the capabilities of software development tools targeting different size processors? Are software development tools affecting your choice between an 8-bit and a 32-bit processor, or are other considerations, such as the need for additional performance headroom for future-proofing, driving your decisions?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


I have witnessed many conversations where someone accuses a vendor of forcing customers to use only their own accessories, parts, or consumables as a way to extract the largest amount of revenue out of the customer base. A non-exhaustive list of examples of such products includes parts for automobiles, ink cartridges for printers, and batteries for mobile devices. While there may be some situations where a company is trying to own the entire vertical market around their product, there is often a reasonable and less sinister explanation for requiring such compliance by the user – namely to minimize the number of ways an end user can damage a product and create avoidable support costs and bad marketing press.

The urban legend that the rock band Van Halen employed a contract clause requiring a venue to provide a bowl of M&Ms backstage with all of the brown candies removed is not only true, but provides an excellent example of such a non-sinister explanation. According to the autobiography of David Lee Roth (the band’s lead singer), the bowl of M&Ms with all of the brown candies removed was a nearly costless way to test whether the people setting up their stage had followed all of the details in the band’s extensive setup and venue requirements. If the band found a single brown candy in the bowl, they ordered a complete line check of the stage before they would agree that the entire stage setup met their safety requirements.

This non-sinister explanation is consistent with the types of products people complain about when they accuse a vendor of merely locking them into consumables for higher revenue. However, when I examine the details I usually see a machine, such as an automobile, that requires tight tolerances on every part; otherwise, small variations in non-approved components can combine to create unanticipated oscillations in the body of the vehicle. In the case of printers, variations in the ink formula can gum up the mechanical portions of the system when put through the wide range of temperature and humidity environments in which printers operate. As for mobile devices, providers are very keen to keep the rechargeable batteries in their products from exploding and hurting their customers.

Do you employ some clever “brown M&M” in your design that helps signal when components may or may not play together well? This could be as simple as performing a version check of the software before allowing the system to go into full operation. Or is the concept of “brown M&Ms” just a story to cover greedy practices by companies?
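
For what it's worth, a version check of that sort might look like the minimal sketch below; the version constant and the read_module_version() call are hypothetical stand-ins for whatever query a real design would use.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical "brown M&M" check: refuse to enter full operation unless
     * the attached module reports a firmware version that has been qualified
     * to work with this system. */
    #define MIN_SUPPORTED_VERSION  0x0203u     /* e.g., v2.3 (illustrative) */

    extern uint16_t read_module_version(void); /* placeholder for the real bus query */

    bool module_is_compatible(void)
    {
        return read_module_version() >= MIN_SUPPORTED_VERSION;
    }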

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


My daughter received a Nintendo 3DS for the holidays. I naively expected the 3D portion of the handheld gaming machine to be a 3D display in a small form factor. Wow, was I wrong. The augmented reality games combine the 3D display with the position and angle of the gaming machine; in other words, what the system displays changes to reflect how you physically move the machine around.

Another use of embedded accelerometers and/or gyroscopes that I have heard about is to enable the system to protect itself when it is dropped. When the system detects that it is falling, it has a brief moment where it tries to lock down the mechanically sensitive portions of the system so that when it impacts the ground it incurs a minimum of damage to sensitive components inside the system.
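
A rough sketch of that drop-detection idea, assuming a 3-axis accelerometer that reports acceleration in g: during free fall the magnitude of the acceleration vector drops toward zero, so a sustained reading well below 1 g can trigger the protective lock-down. The sensor read and lock-down calls, as well as the thresholds, are placeholders.

    #include <math.h>
    #include <stdbool.h>

    /* Placeholder driver calls for a hypothetical 3-axis accelerometer
     * and a lock-down routine that parks the mechanically sensitive parts. */
    extern void read_accel_g(float *ax, float *ay, float *az);
    extern void lock_down_mechanics(void);

    #define FREE_FALL_THRESHOLD_G  0.3f    /* well below the 1 g measured at rest */
    #define FREE_FALL_SAMPLES      5       /* consecutive low-g samples before acting */

    /* Call periodically (e.g., from a timer interrupt or sensor task). */
    void drop_detect_poll(void)
    {
        static int low_g_count = 0;
        float ax, ay, az;

        read_accel_g(&ax, &ay, &az);
        float magnitude = sqrtf(ax * ax + ay * ay + az * az);

        if (magnitude < FREE_FALL_THRESHOLD_G) {
            if (++low_g_count >= FREE_FALL_SAMPLES) {
                lock_down_mechanics();     /* brace for impact */
                low_g_count = 0;
            }
        } else {
            low_g_count = 0;
        }
    }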

Gyroscopes can be used to stabilize images viewed/recorded via binoculars and cameras by detecting jitter in the way the user is holding the system and making adjustments to the sensor subsystem.

As the prices of accelerometers and gyroscopes continue to benefit from the scale of their adoption in gaming systems, the opportunities for including them in other embedded systems improve. Are you using accelerometers and/or gyroscopes in any of your designs? Are you aware of any innovative forms of inertial sensing and processing that might provide inspiration for new capabilities for other embedded developers?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


A bill was recently introduced to mandate that high school students apply to college before they can receive their high school diploma. The bill reminded me that I have worked with a number of people on embedded projects who did not have any college experience. It also reminded me that when I started new projects, the front end of the project usually involved a significant amount of research to understand not just what the project requirements were but also what the technology options and capabilities were.

In fact, while reminiscing, I realized that most of what I needed to do embedded development was learned on the job. My college education did provide value, but it did not provide specific knowledge related to designing embedded systems. I even learned different programming languages on the job because the ones I used in college were not used in the industry. One concept I learned in college and have found useful over the years, big O notation, turned out not to have been taught to even half of the people I worked with while building embedded systems. Truth be told, my mentors played a much larger role than college did in my ability to tackle embedded designs.

But then I wonder: was all of this on-the-job learning the result of working in the early, dirty, wild-west days of embedded systems? Have college curricula since adjusted to address the needs of today’s embedded developers? Maybe they have – in a programming class my daughter recently took in college, the professor spent some time exposing the class to embedded concepts – but the class was not able to go deeply into any topic because it was an introductory course.

Is a college education necessary to become an embedded developer today? If so, does the current curriculum sufficiently prepare students for embedded work, or is something missing from the course work? If not, what skill sets have you found to be essential for someone starting to work with an embedded design team?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.



Are you using Arduino?

Posted by RobertCravotta Jan 5, 2012

The Gingerbreadtron is an interesting example of an Arduino project: a gingerbread house that transforms into a robot, built using an Arduino Uno board and six servo motors. Arduino is an open-source electronics prototyping platform intended for artists, designers, hobbyists, and anyone interested in creating interactive objects or environments. The Arduino project began in 2005, and there are claims that over 300,000 Arduino units are “in the wild.”

According to the website, developers can use Arduino to develop interactive objects, taking inputs from a variety of switches or sensors, and controlling a variety of lights, motors, and other physical outputs. Arduino projects can be stand-alone, or they can communicate with software running on a computer. The boards can be assembled by hand or purchased preassembled; the open-source IDE can be downloaded for free. The Arduino programming language is an implementation of Wiring, a similar physical computing platform, which is based on the Processing multimedia programming environment.

My first exposure to the platform came from a friend who was using it in a product to control a lighting system. Since then, I have seen more examples of people using the platform in hobby projects – which leads to this week’s question: Are you using Arduino for any of your projects or production products? Is a platform that provides a layer of abstraction over the microcontroller sufficient for hardcore embedded designs, or is it a tool that allows developers who are not experts in embedded design to more easily break into building real-world control systems?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.


Around this time of year many people like to publish their predictions for the next year – and according to an article, the “experts and analysts” do not see a lot of innovation coming out of the United States soon. The article mentions and quotes a number of sources that suggest the rate of innovation is going to be sluggish the next few years. One source suggested that "bigger innovation labs and companies are holding back on numerous innovations until they can properly monetize them."

I wonder if these observations and expectations are realistic. I see innovation every time I see some capability become available for less cost, training, or skill than before. I am constantly amazed at the speed at which new technology reaches the hands of people in the lowest quartile of income. More significantly, I am amazed at how these new technologies appear in everyday activities without fanfare. For example, my daughter, who is learning to drive, has pointed out features that she really likes about the car she is driving – features I never gave any thought to, either because I did not notice them or because noticing them would be analogous to noticing and commenting on the air we breathe.

My daughter received a Nintendo 3DS as a present this Christmas. The 3D part of this product goes far beyond the display as it enables her to move the device around and interact with the software in new and meaningful ways. These “invisible” types of innovations do not seem to make big headlines, but I suspect they are still sources of technology disruptions.

As for a company holding off on an innovation, is such a thing possible in a highly competitive world? Can any company afford to hold off on an innovative idea and risk another company beating them to the punch in the market?

Is the rate of innovation stagnating? Is the marketing hype around innovation just not getting the return on investment and so companies are backing off on how they hype it? Are you aware of anyone holding back on innovative ideas waiting for a better consumer market to release them?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.



The two-month payroll tax extension at the center of a budget bill going through the US Congress has elicited a response from a trade organization representing the people who would have to implement the new law, and it is the inspiration for this week’s question. The payroll processing trade organization has claimed that even if the bill became law, it would be logistically impossible to make the changes to tax software before the two-month extension expires. The organization claims the changes required by the bill would need at least 90 days for software testing alone, in addition to time for analysis, design, coding, and implementation. Somehow this scenario reminds me of past conversations in which marketing or management would request changes to a system and engineering would push back because there was not enough time to properly implement and test the change before the delivery date.

If you are part of the group requesting the “simple” change, you may think the developers are overstating the complexity of implementing it. In my experience, there is often strong merit to the developers’ claims because the “simple” change involves some non-obvious complexity, especially when the change affects multiple parts of the system.

In my own experience, we worked on many R&D projects, most with extremely aggressive schedules and engineering budgets. Many of these were quick-and-dirty proofs of concept, and “simple” changes did not have to go through the rigorous production processes – or so the requesters felt. What saved the engineering team on many of these requests was the need to minimize the number of variations between iterations of the prototype so that we could perform useful analysis on the test data in the event of failures. Also, we locked down feature changes to the software during system integration so that all changes were in response to resolving system integration issues.

I suspect this perception that changes can be made quickly and at low risk has been reinforced by the electronics industry’s success at delivering what appears to be a predictable, mundane march of silicon products that cost 30% less and/or deliver twice the performance of the previous year’s parts. Compounding this perception are all of the “science-based” television series that show complex engineering and scientific tasks being completed by one or two people in hours or days when in reality they would take dozens to hundreds of people months to complete.

How long should testing changes to a system take? Is it reasonable to expect any change to be ordered, analyzed, designed, implemented, and tested in less than two weeks? I realize the length of time will depend on the complexity of the change request, but two weeks seems like an aggressive limit for implementing any change that might indirectly affect the operation of the system. That is especially true for embedded systems, where the types of changes requested are much more complex than changing the color of a button or moving a message to a different part of the display. How does your team manage change requests and the time it takes to process and implement them?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

