Review: The Barr Group’s “Embedded Software Training in a Box”

I first heard about the Barr Group’s “Embedded Software Training in a Box” in Dan Saks’ CppCon talk, “extern C: Talking to C Programmers about C++”. During the Q&A portion of the talk, someone asked about breaking into the embedded software industry, and Dan suggested the Barr Group’s course as a possible first step. I’ve worked in the embedded software world for a while, and I recognize weak areas in my skillset. Always looking for opportunities to learn and improve, I decided to give this course a spin.

The Barr Group’s “Embedded Software Training in a Box” is a cheaper version of their four-day, in-person “Embedded Software Boot Camp”. Both courses share the same material and the same exercises, but the boxed kit is self-directed – there is no instructor and no peer support. At the time of this writing, the “Embedded Software Training in a Box” costs $899. That’s a whole lot cheaper than the $2,399 price tag on the in-person boot camp – a difference of about $1,500, plus whatever travel expenses you’d otherwise incur.

The Barr Group’s website provides a syllabus for the boot camp. Major topics you can expect to learn about include idioms for embedded programming in C, memory management, multitasking, interrupt handling, real-time OS concepts, and a few other odds and ends sprinkled in for good measure.

The Unboxing

The following is a picture of what I received after purchasing the kit.

Within the box are five things – an RX63N Demonstration Kit from Renesas ($105.19 from DigiKey at the time of this writing), a USB drive, an “Embedded Software Field Manual”, an exercise manual, and a quick-start letter.

The “Embedded Software Field Manual” is the cornerstone of this course; it’s intended to be the primary course material. This surprised me, as the field manual turned out to be a giant, spiral-bound printout of all of the slides used in the boot camp.

The USB drive contains a lot of things, including exercise projects, datasheets, various interesting magazine articles, a few e-books, and the Windows installer for IAR Embedded Workbench 7.0 – the compiler/IDE of choice for this course.

It’s worth noting that a license for IAR Embedded Workbench isn’t provided with the training material. In order to do any of the exercises, you have to obtain a trial license from IAR; directions for how to do this are included. There are actually two flavors of the trial license – a time-boxed (30 days) evaluation and a code-size-limited, time-unlimited evaluation. The training materials instruct you to choose the code-size-limited option, which is more than sufficient for the exercises in this course.

The way the course works is that you read through the slides until you hit a “Hands-On” exercise. At that point, you’re redirected to the exercise manual, which provides more details. Once you’ve completed the exercise, you return to the slides and keep going. Wash, rinse, repeat. There are nine hands-on exercises in total, plus a “Cap-Stone” project at the end that’s intended to tie everything together. With the exception of the processor user manual, the rest of the documentation on the USB drive is supplementary and can be read separately from the course.

Course Pros

  • The course covers a wide range of topics, including programming idioms, real-time OS concepts and scheduling algorithms, ADCs, UML state charts, interrupts, multitasking approaches, etc.
  • The Renesas demo board contains a variety of different types of components to play with.
  • Getting up and running with the Renesas board and IAR Workbench is super easy.
  • There are plenty of good supplementary articles and books provided on the flash drive.
  • The flow of the course feels very natural.
  • The hands-on projects are somewhat fun. (You can’t go wrong with blinky lights.)
  • Solutions to the exercises are provided.

Course Cons

  • As I said before, the “field manual” is just a collection of slides. Slides provide talking points, and without discussion around those talking points, a lot of information is lost. The vast majority of the slides are good, but there are a lot of places where more context is sorely needed. Sometimes acronyms are used without being defined, sometimes formulas are given without any explanation of what they’re used for, and sometimes graphs are included that are just baffling. I confess I had to turn to YouTube at least twice for clarity on a few topics.
  • Many of the hands-on exercises are virtually impossible for the uninitiated to complete without looking at the solutions.
  • If you’ve never read a data sheet before, you may feel a bit “thrown in the deep end”. The course definitely doesn’t provide a gentle introduction in that regard. When the exercise manual says something like, “Familiarize yourself with such and such 50 pages of the processor data manual”, they really mean read all 50 pages.
  • No discussion of embedded Linux.
  • The tooling used for the course isn’t cross-platform. It requires Windows.
  • No course videos.
  • No online forum to discuss material.

Conclusion

This course is just OK, in my opinion. There are definitely some good nuggets of information – the exposure to μC/OS-III was new to me, and I enjoyed it – but overall, I was kind of disappointed. The slide format didn’t really work for me, and the exercises, while interesting, sometimes felt a little disconnected from the “field manual” material.

The course could have benefited greatly from some online videos, or at the very least, some discussion forums. For an $899 self-directed course in this day and age, one expects such things.

The good news is that, at the time of this writing, if you attend the in-person boot camp within a month of purchasing the boxed kit, the Barr Group will refund the cost of the kit, so long as you bring it with you to the boot camp.

Do I recommend “Embedded Software Training in a Box”? It depends. If the company you work for has a good training budget and is willing to pay for it, sure, go for it. It’s not bad. If you’re paying out of pocket, however, I suggest maybe looking for alternative learning opportunities first.

Hyrum’s Law

A few mornings ago, I was listening to CppCast Episode #70 on my commute into work. This particular episode featured an interview with Titus Winters. If you’re not familiar with Titus, he currently leads the C++ libraries team at Google and generally has lots to say regarding large codebases and sustainability. His CppCon talks are well worth checking out.

During the interview, Titus shared a small anecdote regarding an attempt by his team to make broad code changes. They discovered weird behavioral dependencies that were never intended to be part of the API’s contract, never documented, and certainly never intended for client applications to rely upon. He followed the anecdote with the following “law”, attributed to his colleague Hyrum (presumably Hyrum Wright).

“With a sufficient number of users of an interface, it doesn’t matter what you promised in the interface contracts, all observable behaviors of your class or function or whatnot will be depended upon by somebody.”

Titus calls this Hyrum’s Law, and it’s a truism if there ever was one.

Most developers learn to use an API or library by experimentation, regardless of whether there’s documentation to lean on. It turns out that the human brain has an uncanny need to actually apply a solution to a given problem before fully appreciating and understanding the problem. Attempting to use someone else’s software component in your own code is where the real learning starts. And when a moment of success is achieved, it’s often where the learning stops.

As I see it, there are really two major contributors to Hyrum’s Law. The first is that once some minimal amount of success is achieved, developers working with an API or library don’t always verify that a given component’s contract matches their expectations or usage. We’ve got other things to do, after all. And besides, what we’re doing makes total sense and is not at all weird, right? Umm… maybe? Maybe not?

The second is that interface authors aren’t always comprehensive when it comes to contracts. Sometimes there are side effects and pre/post-conditions that escape our attention, or we make naïve assumptions about our users. Threading restrictions, “undefined” behavior guarantees, object lifetimes, optional vs. required client data, thrown exception types, etc. are all things that sometimes get glossed over when it comes time to document the contract. As such, users of our components can only make assumptions about the things we aren’t explicit about.

As users of others’ code, we must be careful not to assume our usage of an interface is as the author intended. When in doubt (and when possible), verify. And if there’s disparity between the contract and actual behavior, please notify the author!

As authors of interfaces, we must be careful to document and make explicit the intended contract. Of course, that’s often easier said than done. It’s hard to be comprehensive, but we should give it our best shot. And we should be prepared to refine the contract at opportune times as we learn where the holes are.

C++: Heed the Signs!

We’ve all seen this warning at some point…

warning: comparison between signed and unsigned integer expressions [-Wsign-compare]

Comparisons and arithmetic operations involving a mix of signed and unsigned numbers creep into code all the time. And when they do, compilers will sometimes produce helpful warnings such as the one shown above.

But what do you do if you see such a warning? Maybe you say to yourself, “What’s the big deal? The compiler should be able to figure out how to deal with mixed signedness, right?” You then reassure yourself that it’s in fact not a big deal, the compiler is smarter than you, and a blind eye is turned. If you’re feeling particularly cocky, you may even disable that compiler warning altogether.

But what are you ignoring? Could there be something sinister lurking in the dark, waiting to strike when you’re not paying attention?

Treading Into Murky Water

Take a look at this snippet of code.

#include <iostream>
int main(int argc, char **argv)
{
    unsigned int a = 1;
    signed int b = -1;
 
    if (b < a)
        std::cout << "All is right with the world.\n";
    else
        std::cout << "Up is down and down is up!\n";
 
    return 0; 
}

The variable a, which is unsigned, is initialized to 1. The variable b, which is signed, is initialized to -1. And b is obviously less than a, right?

Both Visual Studio and GCC compile the code with a warning similar to that shown above. Running the code produces the following output.

Up is down and down is up!

“Hey now! That’s madness!” you exclaim.

Perhaps. But before I explain what’s happening, let’s journey a little farther down the rabbit hole with one more example…

#include <iostream>
int main(int argc, char **argv)
{
    unsigned int a = 1;
    signed int b = -1;
 
    std::cout << "1 + -1 = " << (a + b) << "\n";
 
    b = -2;
 
    std::cout << "1 + -2 = " << (a + b) << "\n";
 
    return 0; 
}

In this example, both Visual Studio and GCC compile the code without error and without warning, even with the usual warning flags enabled. (GCC’s -Wsign-conversion, which is not part of -Wall or -Wextra, will flag the implicit conversion.) The following output is produced.

1 + -1 = 0
1 + -2 = 4294967295

“What in the world is going on here?!” In short – unsigned promotions.

Let’s see what the C++ standard has to say. You’ll find the following excerpt in Section 5, “Expressions”.

Many binary operators that expect operands of arithmetic or enumeration type cause conversions and yield result types in a similar way. The purpose is to yield a common type, which is also the type of the result. This pattern is called the usual arithmetic conversions, which are defined as follows:

  • If either operand is of scoped enumeration type, no conversions are performed; if the other operand does not have the same type, the expression is ill-formed.
  • If either operand is of type long double, the other shall be converted to long double.
  • Otherwise, if either operand is double, the other shall be converted to double.
  • Otherwise, if either operand is float, the other shall be converted to float.
  • Otherwise, the integral promotions shall be performed on both operands. Then the following rules shall be applied to the promoted operands:
    • If both operands have the same type, no further conversion is needed.
    • Otherwise, if both operands have signed integer types or both have unsigned integer types, the operand with the type of lesser integer conversion rank shall be converted to the type of the operand with greater rank.
    • Otherwise, if the operand that has unsigned integer type has rank greater than or equal to the rank of the type of the other operand, the operand with signed integer type shall be converted to the type of the operand with unsigned integer type.
    • Otherwise, if the type of the operand with signed integer type can represent all of the values of the type of the operand with unsigned integer type, the operand with unsigned integer type shall be converted to the type of the operand with signed integer type.
    • Otherwise, both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.

In short, the last conversion rule is applied if none of the other rules apply – “…both operands shall be converted to the unsigned integer type corresponding to the type of the operand with signed integer type.”

In our first example, the variable b is converted to an unsigned int during the comparison to a. What happens when you convert the value -1 to unsigned? The conversion is modular, so -1 wraps around to UINT_MAX, which on a platform with 32-bit integers equates to 4294967295. And, of course, that’s not smaller than 1, which is why we saw the output we did.

What about the second example? The first line of output seemed to work just fine. Only the second line produced unexpected results.

In the first line of output, we saw:

1 + -1 = 0

Here, a was equal to 1 and b was equal to -1. The code for this was…

std::cout << "1 + -1 = " << (a + b) << "\n";

However, if we substitute the values in for (a + b), applying the unsigned promotion to b, we have the following (assuming 32-bit integers)…

std::cout << "1 + -1 = " << (1 + 4294967295) << "\n";

In this particular case, adding 1 to 4294967295 wraps around to 0 (unsigned arithmetic is modular, so this is well-defined, not overflow in the undefined-behavior sense). What’s especially interesting here is that 0 was our expected result, so this code actually gives us a false sense that everything is working as it should.

It was only after we set b to a value of -2 that we saw weird things happen. Once again, here’s what the code looks like once b is promoted.

std::cout << "1 + -2 = " << (1 + 4294967294) << "\n";

And this, of course, equals 4294967295.

The Takeaway

Many C++ experts are adamant that the only time you should ever use an unsigned data type is when you need to store a bitmask.* Signed data types should be used in all other cases. If you end up needing a value that exceeds the range of a given signed data type, use a bigger signed data type.

I agree with the sentiment of this. But the reality is that we don’t always have the luxury of picking our data types. We’re often at the mercy of APIs, sensor specifications, file formats, network protocols, etc.

Any time you encounter or write code that mixes signed and unsigned data types, proceed with caution. Think carefully about how the data is used, and apply a healthy dose of skepticism. And, of course, when in doubt, test, test, test.

* The STL makes heavy use of size_t, which is an unsigned type. When brought up in conversation, this point is often met with loud and uncomfortable groans. The general feeling is that allowing size_t into the STL was a mistake. But it’s something we’re stuck with for now.