Embedded Insights Question of the Week

9 Posts tagged with the software_techniques tag

My first experience with programming involved mailing punch cards off for batch processing on a remote computer. The results of the run would show up about a week later; the least desirable result was finding out there was a syntax error on one of the cards. I moved up in the world when we gained access to a teletype that let us enter programs directly into the computer; however, neither of these experiences hinted at the true complexity that embedded programming would entail.

The Z80 was the first processor I worked with that truly exposed its innards to me. A key reason for this was the substantial hobbyist community that had grown up around the Z80. I had (and still have in storage) a cornucopia of technical documents that exposed every part of the system in detail and described ways to use them effectively. When I look back on those memories, I marvel at the amount of information that was available despite the lack of any online connectivity – or in other words, no internet.

I found significant value in being able to examine other people’s code in real-world applications. Today’s development support often includes application notes and sample code that address a wide range of use cases for a target processor. Online developer communities give developers a valuable opportunity to find example material and, even better, to query the community for examples of how to address a specific function with that target processor.

I would like to confirm that the specific capabilities of the processor are less important to a beginner (because they all provide a good minimum set of functions) and that good development tools, tutorials and sample code, and responsive developer community support are more critical.

Which processor (or processors) do you find to be beginner friendly, or which offer the development support that makes getting started with the processor faster and easier? Does using an RTOS, operating system, and/or middleware make this easier or harder? Which processors are the best examples of the type of developer community support you find most valuable?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

RNGs (random number generators) have been used across a wide range of applications for many decades, and they can be implemented in a variety of forms. Pure software algorithms make it possible to replay a specific sequence of “random” numbers at a later time, such as when running simulations or debugging a system, simply by reusing the same seed value for the algorithm. Some processors include a hardware random number generator that provides a sequence of numbers that is as close to a true random sequence as possible.
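As a minimal sketch of that replay property, the fragment below uses a simple linear congruential generator; seeding it with the same value during a later debugging run reproduces exactly the same “random” sequence. The generator constants are illustrative, not tied to any particular library.

    #include <stdio.h>
    #include <stdint.h>

    static uint32_t lcg_state;

    static void lcg_seed(uint32_t seed) { lcg_state = seed; }

    static uint32_t lcg_next(void)
    {
        /* Deterministic step: the whole sequence follows from the seed. */
        lcg_state = lcg_state * 1103515245u + 12345u;
        return (lcg_state >> 16) & 0x7FFFu;
    }

    int main(void)
    {
        /* First run: record the seed alongside the simulation results. */
        lcg_seed(0x1234u);
        for (int i = 0; i < 5; i++) printf("%lu ", (unsigned long)lcg_next());
        printf("\n");

        /* Later run: the same seed replays the identical sequence. */
        lcg_seed(0x1234u);
        for (int i = 0; i < 5; i++) printf("%lu ", (unsigned long)lcg_next());
        printf("\n");
        return 0;
    }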

However, the suitability of a sequence of random numbers can vary based on the context of the application that is using it. For example, devices that select random tracks of music to play have evolved from using a truly random sequence to using one that strips out repeated appearances of the same number when they occur too close to one another in the sequence.
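A hedged sketch of that kind of “less random” selection appears below: raw pseudo-random picks are simply redrawn whenever the candidate track appeared within the last few selections. The track count and history depth are made-up values for illustration.

    #include <stdlib.h>

    #define NUM_TRACKS 40   /* illustrative playlist size */
    #define HISTORY     8   /* how many recent picks to remember */

    static int recent[HISTORY] = { -1, -1, -1, -1, -1, -1, -1, -1 };
    static int recent_idx;

    static int recently_played(int track)
    {
        for (int i = 0; i < HISTORY; i++)
            if (recent[i] == track)
                return 1;
        return 0;
    }

    int pick_next_track(void)
    {
        int track;
        do {
            track = rand() % NUM_TRACKS;      /* raw pseudo-random choice */
        } while (recently_played(track));     /* strip out close repeats */

        recent[recent_idx] = track;           /* remember it for a while */
        recent_idx = (recent_idx + 1) % HISTORY;
        return track;
    }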

Are random number generators a solved function for system developers? Because not all RNGs are equal in their randomness, does that affect a porting effort when moving not just from one processor to another, but from one software development toolset to another? Have you been bitten by assumptions about an RNG that turned out to be horribly unsuitable to your application, or are RNGs mature enough that such horror stories are a thing of the past?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

Software refactoring is an activity in which software is transformed in a way that preserves its external behavior while improving its internal structure. I am aware of software development tools that assist with refactoring application software, but it is not clear whether design teams engage in software refactoring for embedded code – especially for control systems.
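For readers who have not run across the term, the contrived fragment below shows the flavor of a behavior-preserving refactoring: duplicated scaling arithmetic is extracted into a single named helper, so the callers read more clearly while producing exactly the same results.

    /* Before: the same ADC conversion is written out in two places. */
    int read_temperature_before(int raw_adc)
    {
        return (raw_adc * 330) / 4096 - 50;
    }

    int read_pressure_before(int raw_adc)
    {
        return (raw_adc * 330) / 4096 - 50;
    }

    /* After: one helper captures the intent; external behavior is unchanged. */
    static int adc_to_engineering_units(int raw_adc, int scale, int offset)
    {
        return (raw_adc * scale) / 4096 - offset;
    }

    int read_temperature_after(int raw_adc)
    {
        return adc_to_engineering_units(raw_adc, 330, 50);
    }

    int read_pressure_after(int raw_adc)
    {
        return adc_to_engineering_units(raw_adc, 330, 50);
    }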

Refactoring was not practiced in the projects I worked on; in fact, the team philosophy was to make only the smallest change necessary when working with a legacy system to effect the change needed. First, we never had the schedule or budget needed just to make the software “easier to understand or cheaper to modify.” Second, changing the software for “cosmetic” purposes could cause an increase in downstream engineering efforts, especially in the area of verifying that the changes did not break the behavior of the system under all relevant operating conditions. Note that many of the control projects I worked on were complex enough that it was difficult just to ascertain whether the system worked properly or just coincidentally looked like it did.

Most of the material I read about software refactoring assumes the software targets the application layer, is not tightly coupled to a specific hardware target, and is implemented in an object-oriented language, such as Java or C++. Are embedded developers performing software refactoring? If so, do you perform it on all types of software, or are there types of software that you definitely include or exclude from a refactoring effort?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

On many of the projects I worked on, it made a lot of sense to implement BISTs (built-in self-tests) because the systems either had safety requirements or the cost of executing a test run of a prototype system was expensive enough to justify the extra cost of making sure the system was in as good a shape as it could be before committing to the test. A quick search for articles about BIST techniques suggests that BIST may not be adopted as a general design technique except in safety-critical, high-margin, or automotive applications. I suspect that my literature search does not reflect reality and/or developers are using a different term for BIST.

A BIST consists of tests that a system can initiate and execute on itself, via software and extra hardware, to confirm that it is operating within some set of conditions. In designs without ECC (error-correcting code) memory, we might include tests to ensure the memory was operating correctly; these tests might be exhaustive or based on sampling, depending on the specifics of each project and the time constraints for system boot up. To test peripherals, we could use loopbacks between specific pins so that the system could control what the peripheral would receive and confirm that outputs and inputs matched.
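The sketch below shows the shape of those two kinds of checks: a destructive RAM pattern test over a dedicated test region and a serial loopback test. The test-region address and the uart_write_byte()/uart_read_byte() driver hooks are hypothetical placeholders for whatever the target hardware actually provides.

    #include <stdint.h>

    #define TEST_RAM_START ((volatile uint32_t *)0x20008000u)  /* assumed test region */
    #define TEST_RAM_WORDS 1024u

    extern void    uart_write_byte(uint8_t b);   /* assumed driver hooks */
    extern uint8_t uart_read_byte(void);

    int bist_ram_pattern_test(void)
    {
        static const uint32_t patterns[] = { 0xAAAAAAAAu, 0x55555555u, 0x00000000u, 0xFFFFFFFFu };

        for (unsigned p = 0; p < sizeof patterns / sizeof patterns[0]; p++) {
            for (unsigned i = 0; i < TEST_RAM_WORDS; i++)
                TEST_RAM_START[i] = patterns[p];            /* write pattern */
            for (unsigned i = 0; i < TEST_RAM_WORDS; i++)
                if (TEST_RAM_START[i] != patterns[p])
                    return -1;                               /* stuck or coupled bit */
        }
        return 0;
    }

    int bist_uart_loopback_test(void)
    {
        /* Assumes TX is looped back to RX on the board or test fixture. */
        for (unsigned b = 0; b < 256u; b++) {
            uart_write_byte((uint8_t)b);
            if (uart_read_byte() != (uint8_t)b)
                return -1;
        }
        return 0;
    }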

We often employed a longer and a shorter version of the BIST to accommodate boot time requirements. The longer version usually was activated manually or only as part of a cold start (possibly with an override signal). The short version might be activated automatically upon a cold or warm start. Despite the effort we put into designing, implementing, and testing BIST as well as developing responses when a BIST failed, we never actually experienced a BIST failure.
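A sketch of that boot-time policy, under the assumption that the hardware exposes a reset-cause register and an override discrete, might look like this; the helper names are invented for the example.

    extern int is_cold_start(void);       /* assumed: read from a reset-cause register */
    extern int override_asserted(void);   /* assumed: maintenance jumper or command */

    extern int bist_short(void);          /* quick checks, run on every start */
    extern int bist_long(void);           /* exhaustive checks, cold start only */

    int run_power_on_bist(void)
    {
        if (bist_short() != 0)
            return -1;                    /* always run the short version */

        if (is_cold_start() || override_asserted())
            return (bist_long() != 0) ? -1 : 0;

        return 0;
    }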

Are you using BIST in your designs? Are you specifying your own test sets, or are you relying on built-in tests that reside in BIOS or third-party firmware? Are BISTs a luxury or a necessity with consumer products? What are appropriate actions that a system might make if a BIST failure is detected?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

Developers have been designing and building multi-processor systems for decades. New multicore processors are entering the market on a regular basis. However, it seems that the market for new development tools that help designers analyze, specify, code, test, and maintain software targeting multi-processor systems is lagging further and further behind the hardware offerings.

A key function of development tools is to help abstract the complexity that developers must deal with to build the systems they are working on. The humble assembler abstracted the zeros and ones of machine code into more easily remembered mnemonics that enabled developers to build larger and more complex programs. Likewise, compilers have been evolving to provide yet another important level of abstraction for programmers and have all but replaced the use of assemblers for the vast majority of software projects. A key value of an operating system is that it abstracts from the developer the configuration, access, and scheduling of the increasing number of hardware resources available in a system.

If multicore and multi-processor designs are to experience an explosion in use in the embedded and computing markets, it seems that development tools should provide more abstractions to hide the complexity of building with these significantly more complex processor configurations.

In general, programming languages do not understand the concept of concurrency, and the extensions that do exist usually require the developer to explicitly identify where and when concurrency exists. Developing software as a set of threads is an approach for abstracting concurrency; however, it is not clear how a threading design method will be able to scale as systems approach ever larger numbers of cores within a single system. How do you design a system with enough threads to occupy more than a thousand cores – or is that the right question?
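As a small illustration of what explicitly identifying the concurrency means in practice, the sketch below splits an array sum across a fixed number of POSIX threads; the developer, not the language, decides where the work is divided and how the partial results are recombined.

    #include <pthread.h>
    #include <stdio.h>

    #define NUM_THREADS 4
    #define N           1000000

    static double data[N];

    struct slice { int begin, end; double partial; };

    static void *sum_slice(void *arg)
    {
        struct slice *s = arg;
        s->partial = 0.0;
        for (int i = s->begin; i < s->end; i++)
            s->partial += data[i];
        return NULL;
    }

    int main(void)
    {
        for (int i = 0; i < N; i++)
            data[i] = 1.0;

        pthread_t    tid[NUM_THREADS];
        struct slice work[NUM_THREADS];
        int chunk = N / NUM_THREADS;

        for (int t = 0; t < NUM_THREADS; t++) {   /* explicitly partition the work */
            work[t].begin = t * chunk;
            work[t].end   = (t == NUM_THREADS - 1) ? N : (t + 1) * chunk;
            pthread_create(&tid[t], NULL, sum_slice, &work[t]);
        }

        double total = 0.0;
        for (int t = 0; t < NUM_THREADS; t++) {   /* join and combine partial results */
            pthread_join(tid[t], NULL);
            total += work[t].partial;
        }
        printf("sum = %f\n", total);
        return 0;
    }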

What tools do you use when programming a multicore or multi-processor system? Does your choice of programming language and compiler reduce your complexity in such designs or does it require you to actively engage more complexity by explicitly identifying areas for parallelism? Do your debugging tools provide you with adequate visibility and control of a multicore/multi-processor system to be able to understand what is going on within the system without requiring you to spend ever more time at the debugging bench with each new design? Does using a hypervisor help you, and if so, what are the most important functions you look for in a hypervisor?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

Of all the different embedded designs I have worked on, the project that stands out the most is the first embedded project I worked on – despite the fact that I already had ten years of experience programming computers before that. I had been paid to write simulators, database engines, an assembler, a time-share system, and several automation tools for production systems. All of these projects executed on mainframe systems or desktop computers, and none of them quite prepared me for how different working on an embedded design is.

My first embedded design was a simple box that would reside on a ground equipment test rack supporting the flight system we were building and demonstrating. There was nothing particularly special about this box – it had a number of input and select lines and a few output lines. What surprised me most when putting it through its first checkout tests was how clueless I was about how to troubleshoot the problems that did arise.

While I was aware of keyboard debounce routines from using my desktop system, I had never had to so completely understand the characteristics of different types of switches before. I had never before had to be aware of the wiring within the system, nor had I ever considered doing an end-to-end check on every wire in a system. While putting this simple box together, I became aware of so many new ways a design could go wrong that I had never had to consider in my earlier designs.
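For readers who have not had to write one, a debounce routine of the sort mentioned above might look like the sketch below: the raw pin reading must hold steady for several consecutive samples before the reported switch state is allowed to change. The sample count and the read_switch_raw() hook are assumptions about the hardware and tick rate.

    #define DEBOUNCE_TICKS 5            /* e.g. five samples at a 10 ms tick */

    extern int read_switch_raw(void);   /* assumed: returns 0 or 1 from the pin */

    /* Call once per timer tick; returns the debounced switch state. */
    int debounced_switch_state(void)
    {
        static int stable_state = 0;
        static int candidate    = 0;
        static int count        = 0;

        int raw = read_switch_raw();

        if (raw == stable_state) {
            count = 0;                  /* no change pending */
        } else if (raw == candidate) {
            if (++count >= DEBOUNCE_TICKS) {
                stable_state = raw;     /* accept the change after a stable run */
                count = 0;
            }
        } else {
            candidate = raw;            /* new candidate state, restart the count */
            count = 1;
        }
        return stable_state;
    }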

On top of the new ways that the system could behave incorrectly, the system had no file system, no display, and no way to print out a trace log or memory dump, which made debugging a very different experience. Printf statements would be of no use, and there was no single-step debugger available. Worse yet, running the target program on my desktop computer to simulate the code was mostly useless because I could not bring the real-world inputs and outputs that the box worked with into the desktop system.

As I tackled each debugging issue, I went from a befuddled state of having no idea how to proceed to a state where I adopted new ways of thinking that let me gain the insights I needed to infer how the system was (or was not) working and what needed to change. I worked on that project alone, and it welcomed me into the world of embedded design and real-world signals with wide-open arms.

How did your introduction to embedded systems go? What insights can you share to warn those who are entering the embedded design community about how designing, debugging, and integrating embedded components is different from writing application-level software?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

I first explored the opportunity of using the Eclipse and NetBeans open source projects as a foundation for embedded software development tools in an article a few years back. Back then these Java-based IDEs (Integrated Development Environments) were squarely targeting application developers, but the embedded community was beginning to experiment with using these platforms for its own development tools. Since then, many companies have built and released Eclipse-based development tools – and a few have retained their own IDEs.

This week’s question is an attempt to start evaluating how these open source development platforms are working out for embedded suppliers and developers. In a recent discussion with IAR Systems, I got the sense that the company’s recent announcement about an Eclipse plug-in for the Renesas RL78 was driven by developer requests. IAR also supports its own proprietary IDE – the IAR Embedded Workbench. Does a software development tools company supporting two different IDEs signal something about the open source platform?

In contrast, Microchip’s MPLAB X IDE is based on the NetBeans platform – effectively a competing open source platform to Eclipse. One capability that the open source platform provides is that the IDE supports development on a variety of hosts running the Linux, Mac OS, and Windows operating systems.

I personally have not tried using either an Eclipse or NetBeans tool in many years, so I do not know how well they have matured. I do recall that managing installations was somewhat cumbersome, and I expect that is much better now. I also recall that the tools were a little slow to react to what I wanted to do, and again, today’s newer computers may have made that a non-issue. Lastly, the open source projects were not really built with the needs of embedded developers in mind, so the embedded tools that migrated to these platforms had to conform as best they could to architectural assumptions driven by the needs of application developers.

Do you care whether an IDE is Eclipse or NetBeans based? Does the open source platform enable you to manage a wider variety of processor architectures from different suppliers in a meaningfully better way? Does it matter to your design-in decision if a processor is supported by one of these platforms? Are tools based on these open source platforms able to deliver the functionality and responsiveness you need for embedded development?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

Developing embedded software differs from developing application software in many ways. The most obvious difference is that there is usually no display available in an embedded system, whereas most application software would be useless without a display to communicate with the user. Another difference is that it can be challenging to know whether the software for an embedded system is performing the correct functions for the right reasons or is only coincidentally performing what appear to be proper functions. This is especially relevant to closed-loop control systems that include multiple types of sensors in the control loop, such as fully autonomous systems.

Back when I was building fully autonomous vehicles, we had to build a lot of custom development tools because standard software development tools just did not perform the tasks we needed. Some of the system-level simulations that we used were built from the ground up; these simulations modeled the control software, rigid-body mechanics, and inertial forces from actuating small rocket engines. We built a hardware-in-the-loop rig so that we could swap real hardware in and out for simulated modules, which let us verify the operation of each part of the system as well as inject faults to see how the system would fare. Instead of a display or monitor to provide feedback to the operator, the system used a telemetry link, which allowed us to effectively instrument the code and capture the state of the system at regular points in time.
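Instrumenting the code in that way amounts to filling a fixed telemetry frame on every control cycle and handing it to the link. The sketch below shows the general shape of that pattern; the field names, crc16(), and telemetry_send() are invented stand-ins rather than the actual flight code.

    #include <stdint.h>

    struct telemetry_frame {
        uint32_t tick;             /* loop counter / timestamp */
        int16_t  gyro_rate[3];     /* example sensor channels */
        int16_t  thruster_cmd[4];  /* example actuator commands */
        uint16_t fault_flags;
        uint16_t crc;
    };

    extern void     telemetry_send(const struct telemetry_frame *f);  /* assumed link driver */
    extern uint16_t crc16(const void *buf, unsigned len);             /* assumed helper */

    void control_loop_step(uint32_t tick)
    {
        struct telemetry_frame f = { .tick = tick };

        /* ... read sensors, run the control law, command actuators,
           filling in the frame fields along the way ... */

        f.crc = crc16(&f, sizeof f - sizeof f.crc);
        telemetry_send(&f);        /* state captured on every loop iteration */
    }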

Examining the telemetry data was cumbersome due to the massive volume of data – not unlike trying to perform debugging analysis with today’s complex SOC devices. We used a custom parser to extract the various data channels that we wanted to examine together, and then used a spreadsheet application to scale and manipulate the raw data and to create plots of the data in which we were looking for correlations. If I were working on a similar project today, I suspect we would still be using a lot of the same types of custom tools. I suspect that the market for embedded software development tools is so wide and fragmented that it is difficult for a tools company to justify creating many tools that meet the unique needs of embedded systems. Instead, there is much more money available from the application side of the software development tool market, and it seems that embedded developers must choose between figuring out how to make tools that address the needs of application software work in their projects or creating and maintaining their own custom tools.
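The custom parser step was conceptually simple: walk the binary frames and emit the channels of interest as comma-separated text that a spreadsheet can plot. A minimal sketch, reusing the hypothetical frame layout from above, might look like this:

    #include <stdio.h>
    #include <stdint.h>

    struct telemetry_frame {
        uint32_t tick;
        int16_t  gyro_rate[3];
        int16_t  thruster_cmd[4];
        uint16_t fault_flags;
        uint16_t crc;
    };

    int main(int argc, char **argv)
    {
        FILE *in = fopen(argc > 1 ? argv[1] : "telemetry.bin", "rb");
        if (!in)
            return 1;

        puts("tick,gyro_x,gyro_y,gyro_z,fault_flags");

        struct telemetry_frame f;
        while (fread(&f, sizeof f, 1, in) == 1) {        /* one frame at a time */
            printf("%lu,%d,%d,%d,0x%04x\n",
                   (unsigned long)f.tick,
                   f.gyro_rate[0], f.gyro_rate[1], f.gyro_rate[2],
                   (unsigned)f.fault_flags);
        }
        fclose(in);
        return 0;
    }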

In your own projects, are standard tools meeting your needs or are you using custom or in-house development tools? What kind of custom tools are you using and what problems do they help you solve?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.

IBM’s Watson computer system recently beat two of the strongest Jeopardy players in the world in a real match of Jeopardy. The match was the culmination of four years of work by IBM researchers. This week’s question has a dual purpose – to focus discussion on how the Watson innovations can/will/might affect the techniques and tools available to embedded developers – and to solicit questions from you that I can ask the IBM research team when I meet up with them (after the main media furor dies down a bit).

The Watson computing system is the latest example of innovations in extreme processing problem spaces. The NOVA video “Smartest Machine on Earth” provides a nice overview of the project and the challenges that the researchers faced while getting Watson ready to compete against human players in the game of Jeopardy. While Watson is able to interpret the natural language wording of Jeopardy answers and tease out appropriate responses for the questions (Jeopardy provides answers and contestants provide the questions), it was not clear from the press material or the video whether Watson was processing natural language in audio form or only in text form. A segment near the end of the NOVA video casts doubt on whether Watson was able to work with audio inputs.

In order to bump Watson’s performance into the champion “cloud” (a distribution, presented in the video, of the performance of Jeopardy champions), the team had to rely on machine learning techniques so that the computing system could improve how it recognizes the many different contexts that apply to words. Throughout the video, we see that the team kept adding more pattern recognition engines (rules?) to the Watson software so that it could handle different types of Jeopardy questions. A satisfying segment in the video showed Watson changing its weighting engine for a Jeopardy category that it did not understand after receiving the correct answers to four questions in that category – much like a human player would refine their understanding of a category during a match.

Watson uses 2800 processors, and I estimate that the power consumption is on the order of a megawatt or more. This is not a practical energy footprint for most embedded systems, but the technologies that make up this system might be available to distributed embedded systems if they can connect to the main system. Also, consider that the human brain is a blood-cooled 10 to 100 W system – this suggests that we may be able to drastically improve the energy efficiency of a system like Watson in the coming years.

Do you think this achievement is huff and puff? Do you think it will impact the design and capabilities of embedded systems? For what technical questions would you like to hear answers from the IBM research team in a future article?

Visit Embedded Insights to see the full conversation occurring across multiple communities about this and other questions of the week.