Thursday, December 26, 2019

Implications and practical applications for AI and ML in embedded systems

"Civilization advances by extending the number of important operations we can perform without thinking about them." —Alfred North Whitehead, British mathematician, 1919

Hailed as a truly transformational technology, artificial intelligence (AI) is positioned to disrupt businesses, either by enabling new approaches to solving complex problems or by threatening the status quo for whole business sectors or types of jobs. Whether you already understand what the excitement is all about and how it will be applied to your market, or you are still struggling to see how you might take advantage of the technology, some basic understanding of artificial intelligence and its potential applications has to be part of your strategic planning process.

Despite the hype, it is sobering to remember that artificial intelligence is not a magic trick that can do anything; it's a tool with which a magician can do a few tricks. In the article below, I discuss the current landscape and outline some considerations for how artificial intelligence may be applied to embedded systems, with a focus on how to plan for deployment in these more constrained environments.

Definitions and basic principles

AI is a computer science discipline looking at how computers can be used to mimic human intelligence. AI has existed since the dawn of computing in the 20th century, when pioneers such as Alan Turing foresaw the possibility of computers solving problems in ways similar to how humans might do so.

Classical computer programming solves problems by encoding algorithms explicitly in code, guiding computers to execute logic to process data and compute an output. In contrast, Machine Learning (ML) is an AI approach that seeks to find patterns in data, effectively learning based on the data. There are many ways in which this can be implemented, including pre-labeling data (or not), reinforcement learning to guide algorithm development, extracting features through statistical analysis (or some other means), and then classifying input data against this trained data set to determine an output with a stated degree of confidence.

Deep Learning (DL) is a subset of ML that uses multiple layers of neural networks to iteratively train a model from large data sets. Once trained, a model can look at new data sets to make an inference about the new data. This approach has gained a lot of recent attention, and has been applied to problems as varied as image processing, speech recognition and financial asset modeling. We also see this approach having a significant impact in future critical infrastructure and devices.

Applying ML/DL in embedded systems

Due to the large data sets required to create accurate models, and the large amount of computing power required to train models, training is usually performed in the cloud or in high-performance computing environments. In contrast, inference is often applied in devices close to the source of data. While distributed or edge training is a topic of great interest, it is not the way most ML systems are deployed today. For the sake of simplicity, let's assume that training takes place in the cloud, and inference will take place at the edge or in-device.

As we've described, ML and DL are data-centric disciplines. As such, creating and training models requires access to large data sets, as well as tools and environments that make it easy to manipulate that data. Frameworks and languages that ease the manipulation of data, and that implement complex math libraries and statistical analysis, are used. Often these are built on languages such as Python, on top of which the ML frameworks themselves sit. There are many such frameworks, but some common ones include TensorFlow, Caffe and PyTorch.
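
To make that concrete, here is a minimal sketch of what cloud-side training with one of these frameworks might look like, using TensorFlow's Keras API. The data set, layer sizes and output path are illustrative placeholders, not a recommendation for any particular application.

```python
# Minimal sketch: train a small classifier in the cloud with TensorFlow/Keras.
# The data set, layer sizes and output path are illustrative placeholders.
import tensorflow as tf

# MNIST stands in here for whatever image or sensor data the real system collects.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5, validation_data=(x_test, y_test))

# Save the trained model so it can later be converted for edge deployment.
tf.saved_model.save(model, "trained_model")
```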

ML frameworks can be used for model development and training, and they can also be used to run inference engines with trained models at the edge. A simple deployment scenario is therefore to deploy a framework such as TensorFlow in a device. As these require rich runtime environments, such as Python, they are best suited to general-purpose compute workloads on Linux. Driven by the need to run ML in mobile devices, a number of lighter-weight inference engines (TensorFlow Lite, PyTorch Mobile) are starting to be developed that require fewer resources, but these are not yet as widely available or as mature as their full-featured parents.
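
As a rough sketch of what that looks like on the device side, the snippet below drives a TensorFlow Lite interpreter from Python. The model file name and the randomly generated input are stand-ins for a real converted model and real sensor data.

```python
# Sketch: run inference at the edge with a TensorFlow Lite interpreter.
# "model.tflite" and the random input are stand-ins for real artifacts and data.
# On a constrained device, the standalone tflite_runtime package provides the
# same Interpreter class without pulling in the full TensorFlow stack.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Fake input data shaped to match whatever the model expects.
input_data = np.random.random_sample(tuple(input_details[0]["shape"])).astype(np.float32)
interpreter.set_tensor(input_details[0]["index"], input_data)

interpreter.invoke()

# The output is a set of class scores; the highest one is the inference result.
scores = interpreter.get_tensor(output_details[0]["index"])
print("predicted class:", scores.argmax(), "confidence:", scores.max())
```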

ML is highly computationally intensive, and early deployments (such as in autonomous vehicles) rely on specialized hardware accelerators such as GPUs, FPGAs or dedicated neural network processors. As these accelerators become more prevalent in SoCs, we can anticipate seeing highly efficient engines to run DL models in constrained devices. When that happens, another deployment option will be to compile trained models for optimized deployment on DNN accelerators. Some such tools already exist, and they use modern compiler frameworks such as LLVM to handle both the model front ends and the hardware accelerator back ends.
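
Toolchains for such accelerators vary, but the general convert-then-deploy pattern can be sketched with TensorFlow Lite's converter. The saved-model path below is the placeholder used in the training sketch above; vendor-specific compilers for DNN accelerators follow a broadly similar flow.

```python
# Sketch: convert a trained model into a smaller, quantized artifact that a
# lightweight runtime or an accelerator toolchain can consume.
# "trained_model" is the placeholder path from the training sketch above.
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model("trained_model")
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable post-training quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```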

Implications for embedded development

Embedded development is often driven by the need to deploy highly optimized and efficient systems. The classical development approach is to start with very constrained hardware and software environments, and add capability only as needed. This has been the typical realm of RTOS applications.

With rapidly changing technologies, we see a development approach that starts with making complex systems work, and then optimizes for deployment at a later stage. As with many major advances in software, open source communities are a large driver of the pace and scale of innovation that we see in ML. Embracing tools and frameworks that originate in open source, and often start with development in Linux, is rapidly becoming the primary innovation path. Using both a real-time operating system (RTOS) and Linux, or migrating open source software from Linux to an RTOS, are therefore important developer journeys that must be supported.

How ‘John Mulaney and the Sack Lunch Bunch’ Became One of 2019’s Weirdest, Most Wonderful Hours of TV

SPOILER ALERT: Proceed with caution if you have not yet watched "John Mulaney and the Sack Lunch Bunch," streaming now on Netflix.

Children's entertainment and "existential angst and fear" might not seem like a natural combination on the face of it, but according to John Mulaney, that combination's fueled the genre for as long as he can remember.

"A lot of entertainment that I consumed as a kid had a lot of either melancholy or dread, and it was not some undertone," the comedian tells Variety. "Even things like 'I know an old lady who swallowed a fly' — every part of that [song] is odd and disturbing!" 

And so as he set about building "John Mulaney and the Sack Lunch Bunch," his new variety special for Netflix, co-written by Marika Sawyer ("Saturday Night Live") with music by composer Eli Bolin ("Sesame Street," "Co-Op"), they drew inspiration from musical influences including Burt Bacharach, Howard Ashman and Alan Menken, and Trinidadian calypso legend Mighty Sparrow. They turned to pieces like Maurice Sendak and Carole King's "Really Rosie" and Harry Nilsson's "The Point," which were also fueled by catchy songs and extremely relatable anxieties. "As a kid, we watched movies like 'Little Shop of Horrors' and 'Clue,' and they didn't seem inappropriate — and I don't think they are," Mulaney says. "But they had a lot of tension to them."

They also assembled a cast of preternaturally talented children (the aforementioned "Sack Lunch Bunch") to sing songs about feeling ignored, confused and melancholy. They threw in silly cutaway jokes and wonderfully weird turns from people like David Byrne and Jake Gyllenhaal, and even a brief aside called "Girl Talk with Richard Kind." As Bolin puts it, the special's goal was to follow in the footsteps of their childhood touchstones, which all erred on the side of "very funny, and a little dark."

The only unscripted parts of the special are when they ask each kid castmember about their biggest fears, a choice Mulaney explains as coming out of some personal curiosity. As a kid, he says, "I remember being afraid and having to just deal with it. And I wondered if that's what they're going through, too."

Another of his and Sawyer's goals was to make a special that both kids and adults could enjoy without condescending to either demographic: "We didn't want it to be 5 jokes for kids and 1 joke for adults that wasn't at all for kids," as Mulaney puts it.

"The Sack Lunch Bunch" pokes fun at this particular trope in a sketch about a focus group for "Bamboo 2: Bamboozled," a fictional take on the kind of nightmare animated movie that's become a Hollywood staple. The sketch also pokes fun at these movies' star-studded casts: At one point, Mulaney asks the children if they could tell that "Danny the Dodo" was voiced by "someone, but you couldn't quite place him," before revealing that the answer's Jeremy Renner.

"I always found it bizarre that no expense was spared in getting huge movie stars into animated films," muses Mulaney. "I'd see the poster and be like, 'Does a kid care that it's Luke Wilson?' Angela Lansbury and Jerry Orbach were in 'Beauty and the Beast' and are very respected, but they're not Brad and Angelina." 

With that in mind, the celebrity cameos of "The Sack Lunch Bunch" are extremely specific and catered to their talents. Broadway stars André De Shields and Annaleigh Ashford pop up to anchor tricky musical numbers (as a mysterious math tutor and "white lady sobbing on the street," respectively). Natasha Lyonne joins the kids in sharing her biggest fears (including escalators, temperamental toilets and nuclear disasters). And two of the biggest scene-stealers — aside from the children themselves — are Byrne and Gyllenhaal, each turning in wildly weird and unforgettable performances. 

Byrne's number, performed with Lexi Perkel, has the two fighting for attention from a party full of oblivious adults with magic tricks, flashing lights, and matching "Frozen" costumes. The accompanying prog-rock song sounds like one Byrne could have written himself, which was exactly the point. (Lifelong Talking Heads fan Bolin composed it, but says Byrne nonetheless shared some ideas and his own "general personal feelings of alienation.")

"There's a Talking Heads song called 'Warning Sign,' and it has this line in it: 'Hear my voice, it's saying something and I hope you're concentrating,'" recalls Mulaney. "I always liked that line — so we just thought something about a kid's frustration and a little of that very funny David Byrne, 'I'm trying to be polite but I'm frustrated' lyricism would go well together."

All of that feels on point for Byrne's particular brand — but Gyllenhaal's number contrasts with it in such a beautifully unhinged way that it's not at all an exaggeration to say that it might haunt your dreams for weeks after seeing it.

As "Mr. Music," Gyllenhaal crashes in near the end of the special with a press-on mustache, and jacket that doubles as a xylophone. "I'm here to teach you about music!" he cries, eyes wild. He refuses a child's offer of a clarinet ("Put away your skinny trumpet! Instruments are stupid!") before launching into a jaunty calypso tune about how there's "music here, music there, music, music, everywhere!" The only problem is, nothing he uses to demonstrate that fact — whether a pudding cup, leaky faucet, or fancy toilet — ends up making any sounds at all. Mr. Music's ensuing breakdown is one of the most purely bizarre things on TV this year, period, and Gyllenhaal embraces the downward spiral so thoroughly that his mustache comes flying off by the end.

"It was pretty clear what [Mr. Music] was and that he would just be struggling," says Mulaney of the writing process. "[But] until Mr. Gyllenhaal came in, the level of breakdown he was going to have was uncharted."

(Later, when Gyllenhaal asked Mulaney who else they wanted to ask, Mulaney assured him that he was their "first choice in a world where we couldn't have Harry Belafonte.")

This number, unlike every other one in the special, had to be performed and recorded live as Gyllenhaal careened through over a dozen vignettes of failure. "It was really by the seat of our pants," says Bolin — plus, given the premise, "if there's any sound anywhere, you have to start and go back."

But Gyllenhaal was so committed to the song and persona that it all came together, anyway. "When a great actor commits himself to comedy, and I say this very reluctantly as a comedian who's not a great actor," adds Mulaney, "it's funnier than any comedian could ever be. Every take he made a different choice, and on the first one, when he was pulling notes out of the air, we were like, 'This is the funniest thing we've ever seen and we've worked with many, many comic talents.'"

Eventually, the kids help a bedraggled Mr. Music relax and have some fun, despite his frustration (and several ridiculous injuries). Their final singalong turns a potentially sour experience into something joyous and hopeful, confirming that "John Mulaney and the Sack Lunch Bunch" is one of the year's strangest hours of television, with one of the biggest hearts.

Wednesday, December 25, 2019

Mystery and intrigue at Meyersdale elementary

Meyersdale Area Elementary School students enjoyed a morning of mystery and intrigue for the nine-weeks PBIS reward – a magic show by Double Deception. Dan Miller and Gary Weimer amazed and astounded students with a 45-minute show on Nov. 27, filled with tricks, sleight of hand and even an “appearing” act.

Weimer and Miller have been sharing their passion for entertaining for decades with people of all ages throughout the region. Between the two, the men have nearly 75 years experience in magic. They recently joined forces to create Double Deception and now offer shows that delight and amaze their audiences with “twice” the fun.

Double Deception’s show at MAES included audience participation and several students had the opportunity to join the duo on stage. The finale of the morning’s show featured a special assistant – Devin Pritts, elementary principal. With careful planning and concentration, Double Deception was able to make Pritts magically “appear” from thin air.

Meyersdale elementary implemented Positive Behavioral Interventions and Supports (PBIS) as a way to encourage good behavior. With PBIS, schools teach children about behavior, just as they would teach other subjects like reading or math. The focus of PBIS is prevention, not punishment.

Pritts said these incentive events are highly anticipated by the students and they especially enjoyed the magic show.

“The students were enthralled by every trick and the magicians did a great job of surprising everyone,” he said.

The event was presented, in part, by the Meyersdale Area Educational Boosters. These volunteers work to come up with new and exciting ways to reward children for their efforts.

“It was wonderful to have Double Deception perform a magic show for the Meyersdale elementary students who have demonstrated positive behaviors within our school,” said Boosters’ President Julia Smith. “The students were highly entertained and enjoyed interacting with the magicians during the performance.”

Tuesday, December 24, 2019

Nutcracker comes to Marietta stage tonight

Photo submitted Pictured from left to right are Grace Sears as Sugar Plum Fairy, Alia Ott as Clara (seated), Anna Menarchek as Sugar Plum Fairy, Guest Artist Danny Bayer as Herr Drosselmeyer.

Prepared to wow with the magic of the holiday classic, the Mid-Ohio Valley Ballet Company is pulling out all of the stops for its performances today and next week.

"We have a professional company of adult dancers, a guest artist and several juniors and seniors in high school dancing in The Nutcracker this year plus our children in supplemental roles," said Suzie Gunter, head of the company and choreographer.

The youngest dancer in the show is 8-year-old Meredith Craig of Parkersburg, who plays the youngest angel in the iconic Christmas production.

"My favorite part is everything, almost," laughed Craig. "All of the dancers are so pretty, and I love to watch all of Mom's cool tricks. But I love the math too–we have to do so much counting to know when we go up, come in, hold for six seconds, etc."

Craig is the youngest of her family to perform in the tradition, following in the footsteps of both her older sister, Caolinn Craig, 12, who plays a friend of Clara, a soldier in the battle scene and a snowflake; and her mother Jolene Troisi, who plays a mom in the party scene, a coffee dancer and the snow queen.

Photo submitted The nutcracker prince leads the charge against the mouse queen as their followers dance about them in the Mid-Ohio Valley Ballet Company's annual performance of "The Nutcracker."

"That's how it is in a small company, I danced with the Gunters in high school and loved it–we had lots of grants to travel as a company," said Troisi. "Now I teach there and perform with the company… But it's really fun this year, and my 12-year-old is rehearsing with me and all of us getting to be in this together–it's a tradition we want to continue."

"Plus, ballet has all of the motions to show expressions with your hands," added Craig.

Gunter will again return to the stage this year as Mrs. Stahlbaum–a tradition of nearly 40 years with the company.

"The major difference between a ballet and other performances you see in a theater is we don't talk, our dancers' pantomime," she explained.

Troisi's boyfriend Josh Channell, of Parkersburg, is also the creator of the traditional show's new addition this year after working for four years on the production's technical crew.

"It's the first time we'll have an all-hydraulic lifted Christmas tree," added Gunter. "It will grow by magic at its debut at Marietta High School."

The classic's oldest performer is Danny Bayer, 58, of Vienna, who, while not a dancer, has enjoyed working with the ballerinas in the company as not only a well-known local thespian but also a retired schoolteacher.

"I play the role of Drosselmeyer who brings the nutcracker to a party and gives it to Clara, the little girl at the party," explained Bayer. "But he's kind of a magical guy, or is he?"

Bayer said the greatest challenge of rehearsing the production has not been the mixture of ages but of not speaking in his role.

"But it's beautiful to watch," he said. "And my sister has never been to a ballet–I'm excited for her to come. It's a wonderful introduction to ballet, it's easy to follow."

For those who have never seen a ballet, Gunter said not to worry, and please feel welcome.

"We'd love for people to come dressed up–that's exciting for us, but not a requirement," said Gunter.

And Craig added, the production is fun for both genders.

"We actually have a bunch of boys in the show," said the 8-year-old. "It's not just for girls, it's boys, too."

The first production begins tonight at 7:30 in the Marietta High School Auditorium.

Tickets are $12 for adults and $6 for children and senior citizens and can be purchased at the door.

Janelle Patterson can be reached at jpatterson@mariettatimes.com.

If you go:

• What: Mid-Ohio Valley Ballet Company annual performance of "The Nutcracker."

• Dates:

• Today: 7:30 p.m. at the Marietta High School Auditorium.

• Dec. 13: 7:30 p.m. at the Blennerhassett School Auditorium.

• Dec. 14: 7 p.m. at the Ripley High School Auditorium.

• Tickets: $12 for adults, $6 for senior citizens and children.

• To purchase tickets by credit card visit MOVB Studios at 1311 Ann St. Parkersburg, during box office hours: Tuesday 4:30 to 7:30 p.m.

• Or purchase with cash at the door, or during box office hours Thursdays 4:30-7:30 p.m.

Source: MOV Ballet.

Friday, December 20, 2019

Explora demonstrates the math behind magic tricks

Posted: Mar 13, 2018 / 03:08 PM MDT/ Updated: Mar 13, 2018 / 03:11 PM MDT

Explora science museum takes the luck out of magic but offers a magical adults-only event for those who feel lucky.

How can MATH be Magic? Using playing cards and probability games, Explora demonstrates the math behind 'magic' card tricks in the studio, and offers an opportunity to learn more at a special night geared just for grown-ups.

Not only can you explore the whole museum's hands-on exhibit activities, you can also test your luck with probability games and lucky superstitions. Are owls bad luck? Meet one from On a Wing and a Prayer and see! Give a dog a lucky day by adopting one through Watermelon Mountain Ranch! Enjoy music from Strictly Commercial, try delicious mac n' cheese from Good & Thorough Foods, and taste beverages from Velvet Coffeehouse.

Also, see the night sky with the Albuquerque Astronomical Society and connect with people all over the world with the High Desert Amateur Radio Club!

For more information, visit the Explora website.

Sunday, December 8, 2019

‘Sesame Street’ teaches kids the magic of the ‘power of yet’

For Justin Baret, the way to Sesame Street was sprinkled with magic. Baret plays Justin the magician in "Sesame Street Live: Make Your Magic." That he shares a first name with his character is a coincidence – and part of the magic, he said.

In the show, the magician is going to put on a show for the neighborhood, and Elmo wants to be a part of it. The problem is Elmo doesn't know magic. So Elmo and friends learn about the "power of yet" – meaning "you can do anything as long as you put your mind to it," Baret said.

Much like Elmo, Baret didn't know magic when he was hired for the show. But now, thanks to practice – and a magician hired to teach the cast – he has more than a few tricks up his sleeve, including making a flower, or 16 foam balls, appear.

"Learning magic was my 'power of yet' story," he said. "It was really, truly, ironically enough a magical experience for me." Being a "Sesame Street" show, there are lots of lessons embedded in the story, all of them tucked into the theme of the "power of yet."

"We also teach the kids that there's magic in everyday life," Baret said. That magic comes out in lessons about the science of shadows, the power of primary colors, counting and baking cookies.

And there's no age limit to learning from the show, Baret said. "The magic that 'Sesame Street' brings is applicable to not only children, but to people my age and people older." He especially sees that in the meet-and-greet sessions cast members do before shows.

"Parents come up and hug Elmo, Cookie Monster or Grover, and thank them for the impact that they had growing up, teaching them something," Baret said. People will also tell them "Sesame Street" helped them learn English or become more confidant in math.

"We have our show focused to children, but it's just so nice to see how the show has helped so many people over so many years," he said. And, yes, Baret was a "Sesame Street" fan when he was a kid. His favorite characters were Elmo and Grover because they're the funniest, he said.

Having Elmo or Grover or Big Bird calling him by name – and that it's his real name – "It's like your wildest childhood dream come to life."

Wednesday, December 4, 2019

How 3D Game Rendering Works: Texturing

In this third part of our deeper look at 3D game rendering, we'll be focusing on what happens to the 3D world after the vertex processing is done and the scene has been rasterized. Texturing is one of the most important stages in rendering, even though all that is happening is that the colors of a two dimensional grid of colored blocks are calculated and changed.

The majority of the visual effects seen in games today are down to the clever use of textures -- without them, games would be dull and lifeless. So let's dive in and see how this all works!

As always, if you're not quite ready for a deep dive into texturing, don't panic -- you can get started with our 3D Game Rendering 101. But once you're past the basics, do read on for our next look at the world of 3D graphics.

Let's start simple

Pick any top selling 3D game from the past 12 months and it will share one thing in common with all the rest: the use of texture maps (or just textures). This is such a common term that most people will conjure up the same image when thinking about textures: a simple, flat square or rectangle that contains a picture of a surface (grass, stone, metal, clothing, a face, etc).

But when used in multiple layers and woven together using complex arithmetic, the use of these basic pictures in a 3D scene can produce stunningly realistic images. To see how this is possible, let's start by skipping them altogether and seeing what objects in a 3D world can look like without them.

As we have seen in previous articles, the 3D world is made up of vertices -- simple shapes that get moved and then colored in. These are then used to make primitives, which in turn are squashed into a 2D grid of pixels. Since we're not going to use textures, we need to color in those pixels.

One method that can be used, called flat shading, involves taking the color of the first vertex of the primitive, and then using that color for all of the pixels that get covered by the shape in the raster. It looks something like this:

This is obviously not a realistic teapot, not least because the surface color is all wrong. The colors jump from one level to another, there is no smooth transition. One solution to this problem could be to use something called Gouraud shading.

This is a process which takes the colors of the vertices and then calculates how the color changes across the surface of the triangle. The math used is known as linear interpolation, which sounds fancy but in reality means if one side of the primitive has the color 0.2 red, for example, and the other side is 0.8 red, then the middle of the shape has a color midway between 0.2 and 0.8 (i.e. 0.5).
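
A minimal sketch of that interpolation, using the 0.2 and 0.8 red values from the example above (the numbers are purely illustrative):

```python
# Sketch: linear interpolation of a single color channel across a primitive,
# which is the core of Gouraud shading. Values and step count are illustrative.
def lerp(a, b, t):
    """Blend between a and b; t=0 gives a, t=1 gives b."""
    return a + (b - a) * t

left_red, right_red = 0.2, 0.8   # red values at the two sides of the primitive

# Sample the red channel at a few positions across the surface.
for i in range(5):
    t = i / 4                     # 0.0, 0.25, 0.5, 0.75, 1.0
    print(f"t={t:.2f}  red={lerp(left_red, right_red, t):.2f}")
# At t=0.50 the result is 0.50, the midway value described above.
```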

It's relatively simple to do and that's its main benefit, as simple means speed. Many early 3D games used this technique, because the hardware performing the calculations was limited in what it could do.

But even this has problems, because if a light is pointing right at the middle of a triangle, then its corners (the vertices) might not capture this properly. This means that highlights caused by the light could be missed entirely.

While flat and Gouraud shading have their place in the rendering armory, the above examples are clear candidates for the use of textures to improve them. And to get a good understanding of what happens when a texture is applied to a surface, we'll pop back in time... all the way back to 1996.

A quick bit of gaming and GPU history

Quake was released some 23 years ago, a landmark game by id Software. While it wasn't the first game to use 3D polygons and textures to render the environment, it was definitely one of the first to use them all so effectively.

Something else it did was to showcase what could be done with OpenGL (the graphics API was still in its first revision at that time), and it also went a very long way toward helping the sales of the first crop of graphics cards like the Rendition Verite and the 3Dfx Voodoo.

Compared to today's standards, the Voodoo was exceptionally basic: no 2D graphics support, no vertex processing, and just the very basics of pixel processing. It was a beauty nonetheless:

It had an entire chip (the TMU) for getting a pixel from a texture and another chip (the FBI) to then blend it with a pixel from the raster. It could do a couple of additional processes, such as doing fog or transparency effects, but that was pretty much it.

If we take a look at an overview of the architecture behind the design and operation of the graphics card, we can see how these processes work.

The FBI chip takes two color values and blends them together; one of them can be a value from a texture. The blending process is mathematically quite simple but varies a little between what exactly is being blended, and what API is being used to carry out the instructions.

If we look at what Direct3D offers in terms of blending functions and blending operations, we can see that each pixel is first multiplied by a number between 0.0 and 1.0. This determines how much of the pixel's color will influence the final appearance. Then the two adjusted pixel colors are either added, subtracted, or multiplied; in some functions, the operation is a logic statement where something like the brightest pixel is always selected.

The above image is an example of how this works in practice; note that for the left hand pixel, the factor used is the pixel's alpha value. This number indicates how transparent the pixel is.
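
A hedged sketch of that per-pixel blend, using the familiar source-alpha factors rather than any particular driver's implementation:

```python
# Sketch: classic "source alpha" blending of a texture pixel over a frame pixel.
# Colors are (r, g, b) tuples in the 0.0-1.0 range; the values are illustrative.
def blend(src, dst, src_alpha):
    """new = src * alpha + dst * (1 - alpha), applied per channel."""
    return tuple(s * src_alpha + d * (1.0 - src_alpha) for s, d in zip(src, dst))

texture_pixel = (0.9, 0.2, 0.1)   # color sampled from the texture
frame_pixel   = (0.1, 0.1, 0.4)   # color already sitting in the raster
alpha         = 0.75              # how opaque the texture pixel is

print(blend(texture_pixel, frame_pixel, alpha))   # roughly (0.7, 0.175, 0.175)
```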

The rest of the stages involve applying a fog value (taken from a table of numbers created by the programmer) and then doing the same blending math; carrying out some visibility and transparency checks and adjustments; and finally writing the color of the pixel to the memory on the graphics card.

Why the history lesson? Well, despite the relative simplicity of the design (especially compared to modern behemoths), the process describes the fundamental basics of texturing: get some color values and blend them, so that models and environments look how they're supposed to in a given situation.

Today's games still do all of this, the only difference is the amount of textures used and the complexity of the blending calculations. Together, they simulate the visual effects seen in movies or how light interacts with different materials and surfaces.

The basics of texturing

To us, a texture is a flat, 2D picture that gets applied to the polygons that make up the 3D structures in the viewed frame. To a computer, though, it's nothing more than a small block of memory, in the form of a 2D array. Each entry in the array represents a color value for one of the pixels in the texture image (better known as texels - texture pixels).

Every vertex in a polygon has a pair of coordinates (usually labelled u,v) associated with it, telling the computer which pixel in the texture belongs to that vertex. The vertices themselves have a set of 3 coordinates (x,y,z), and the process of linking the texels to the vertices is called texture mapping.
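
A minimal sketch of that lookup, with a made-up 4 x 4 array standing in for a texture:

```python
# Sketch: mapping u,v coordinates (0.0-1.0) onto a tiny 4 x 4 texture.
# The texture here is a made-up 2D array of grayscale texel values.
texture = [
    [ 10,  20,  30,  40],
    [ 50,  60,  70,  80],
    [ 90, 100, 110, 120],
    [130, 140, 150, 160],
]
width = height = 4

def texel_at(u, v):
    """Convert normalized u,v into array indices and return that texel."""
    x = min(int(u * width), width - 1)
    y = min(int(v * height), height - 1)
    return texture[y][x]

print(texel_at(0.0, 0.0))   # top-left texel -> 10
print(texel_at(0.9, 0.9))   # bottom-right texel -> 160
```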

To see this in action, let's turn to a tool we've used a few times in this series of articles: the Real Time Rendering WebGL tool. For now, we'll also drop the z coordinate from the vertices and keep everything on a flat plane.

From left-to-right, we have the texture's u,v coordinates mapped directly to the corner vertices' x,y coordinates. Then the top vertices have had their y coordinates increased, but as the texture is still directly mapped to them, the texture gets stretched upwards. In the far right image, it's the texture that's altered this time: the u values have been raised but this results in the texture becoming squashed and then repeated.

This is because although the texture is now effectively taller, thanks to the higher u value, it still has to fit into the primitive -- essentially the texture has been partially repeated. This is one way of doing something that's seen in lots of 3D games: texture repeating. Common examples of this can be found in scenes with rocky or grassy landscapes, or brick walls.

Now let's adjust the scene so that there are more primitives, and we'll also bring depth back into play. What we have below is a classic landscape view, but with the crate texture copied, as well as repeated, across the primitives.

Now that crate texture, in its original gif format, is 66 kiB in size and has a resolution of 256 x 256 pixels. The original resolution of the portion of the frame that the crate textures cover is 1900 x 680, so in terms of just pixel 'area' that region should only be able to display 20 crate textures.

We're obviously looking at way more than 20, so a lot of the crate textures in the background must be much smaller than 256 x 256 pixels. Indeed they are, and they've undergone a process called texture minification (yes, that is a word!). Now let's try it again, but this time zoomed right into one of the crates.

Don't forget that the texture is just 256 x 256 pixels in size, but here we can see one texture being more than half the width of the 1900 pixels wide image. This texture has gone through something called texture magnification.

These two texture processes occur in 3D games all the time, because as the camera moves about the scene or models move closer and further away, all of the textures applied to the primitives need to be scaled along with the polygons. Mathematically, this isn't a big deal, in fact, it's so simple that even the most basic of integrated graphics chips blitz through such work. However, texture minification and magnification present fresh problems that have to be resolved somehow.

Enter the mini-me of textures

The first issue to be fixed is for textures in the distance. If we look back at that first crate landscape image, the ones right at the horizon are effectively only a few pixels in size. So trying to squash a 256 x 256 pixel image into such a small space is pointless for two reasons.

One, a smaller texture will take up less memory space in a graphics card, which is handy for trying to fit into a small amount of cache. That means it is less likely to be removed from the cache, and so repeated use of that texture will gain the full performance benefit of data being in nearby memory. The second reason we'll come to in a moment, as it's tied to the same problem for textures zoomed in.

A common solution to the use of big textures being squashed into tiny primitives involves the use of mipmaps. These are scaled down versions of the original texture; they can be generated by the game engine itself (by using the relevant API command to make them) or pre-made by the game designers. Each level of mipmap texture has half the linear dimensions of the previous one.

So for the crate texture, it goes something like this: 256 x 256 → 128 x 128 → 64 x 64 → 32 x 32 → 16 x 16 → 8 x 8 → 4 x 4 → 2 x 2 → 1 x 1.

The mipmaps are all packed together, so that the texture is still the same filename but is now larger. The texture is packed in such a way that the u,v coordinates not only determine which texel gets applied to a pixel in the frame, but also from which mipmap. The programmers then code the renderer to determine the mipmap to be used based on the depth value of the frame pixel, i.e. if it is very high, then the pixel is in the far distance, so a tiny mipmap can be used.
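
As a rough sketch (not any engine's actual code), generating a mipmap chain by averaging blocks of texels, and then picking a level based on how many texels would land on one screen pixel, could look like this:

```python
# Sketch: build a mipmap chain by box filtering (averaging 2x2 blocks of texels),
# then pick a mip level from how many texels would cover a single screen pixel.
import numpy as np

def build_mipmaps(texture):
    """Return [original, half size, quarter size, ...] down to 1 x 1.
    Assumes a square, power-of-two texture."""
    chain = [texture]
    while chain[-1].shape[0] > 1:
        t = chain[-1]
        # Average each 2x2 block of texels into one texel of the next level down.
        smaller = (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
        chain.append(smaller)
    return chain

def pick_mip_level(texels_per_pixel, max_level):
    """More texels per screen pixel (a more distant surface) -> higher mip level."""
    level = int(np.log2(max(texels_per_pixel, 1.0)))
    return min(level, max_level)

crate = np.random.rand(256, 256)             # stand-in for the 256 x 256 crate texture
mips = build_mipmaps(crate)
print([m.shape for m in mips])               # (256, 256), (128, 128), ... (1, 1)
print(pick_mip_level(8.0, len(mips) - 1))    # 8 texels per pixel -> mip level 3
```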

Sharp eyed readers might have spotted a downside to mipmaps, though, and it comes at the cost of the textures being larger. The original crate texture is 256 x 256 pixels in size, but as you can see in the above image, the texture with mipmaps is now 384 x 256. Yes, there's lots of empty space, but no matter how you pack in the smaller textures, the overall increase to at least one of the texture's dimensions is 50%.

But this is only true for pre-made mipmaps; if the game engine is programmed to generate them as required, then the increase is never more than about 33% of the original texture size. So for a relatively small increase in memory for the texture mipmaps, you're gaining performance benefits and visual improvements.
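
That roughly one-third figure falls out of the geometric series of ever-smaller levels, which this quick check illustrates:

```python
# Quick check of the mipmap memory overhead: each level holds a quarter of the
# texels of the one before it, so the extra cost converges to about a third.
base = 256 * 256                                            # texels in the original texture
extra = sum((256 >> level) ** 2 for level in range(1, 9))   # 128^2 + 64^2 + ... + 1^2
print(extra / base)                                         # ~0.333
```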

Below is an off/on comparison of texture mipmaps:

On the left hand side of the image, the crate textures are being used 'as is', resulting in a grainy appearance and so-called moiré patterns in the distance. Whereas on the right hand side, the use of mipmaps results in a much smoother transition across the landscape, where the crate texture blurs into a consistent color at the horizon.

The thing is, though, who wants blurry textures spoiling the background of their favorite game?

Bilinear, trilinear, anisotropic - it's all Greek to me

The process of selecting a pixel from a texture, to be applied to a pixel in a frame, is called texture sampling, and in a perfect world, there would be a texture that exactly fits the primitive it's for -- regardless of its size, position, direction, and so on. In other words, texture sampling would be nothing more than a straight 1-to-1 texel-to-pixel mapping process.

Since that isn't the case, texture sampling has to account for a number of factors:

  • Has the texture been magnified or minified?
  • Is the texture original or a mipmap?
  • What angle is the texture being displayed at?
    Let's analyze these one at a time. The first one is obvious enough: if the texture has been magnified, then there will be more texels covering the pixel in the primitive than required; with minification it will be the other way around, each texel now has to cover more than one pixel. That's a bit of a problem.

    The second one isn't though, as mipmaps are used to get around the texture sampling issue with primitives in the distance, so that just leaves textures at an angle. And yes, that's a problem too. Why? Because all textures are images generated for a view 'face on', or to be all math-like: the normal of a texture surface is the same as the normal of the surface that the texture is currently displayed on.

    So having too few or too many texels, and having texels at an angle, require an additional process called texture filtering. If you don't use this process, then this is what you get:

    Here we've replaced the crate texture with a letter R texture, to show more clearly how much of a mess it can get without texture filtering!

    Graphics APIs such as Direct3D, OpenGL, and Vulkan all offer the same range of filtering types but use different names for them. Essentially, though, they all go like this:

  • Nearest point sampling
  • Linear texture filtering
  • Anisotropic texture filtering
    To all intents and purposes, nearest point sampling isn't filtering - this is because all that happens is the nearest texel to the pixel requiring the texture is sampled (i.e. copied from memory) and then blended with the pixel's original color.

    Here comes linear filtering to the rescue. The required u,v coordinates for the texel are sent off to the hardware for sampling, but instead of taking the very nearest texel to those coordinates, the sampler takes four texels. These are directly above, below, left, and right of the one selected by using nearest point sampling.

    These 4 texels are then blended together using a weighted formula. In Vulkan, for example, the formula is:

    The T refers to texel color, where f is for the filtered one and 1 through to 4 are the four sampled texels. The values for alpha and beta come from how far the point defined by the u,v coordinates sits from the centers of the four sampled texels.
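
Written out with the same names, the usual bilinear weighting this describes is Tf = (1 - alpha)(1 - beta)T1 + alpha(1 - beta)T2 + (1 - alpha)beta T3 + alpha beta T4; a small code sketch of it, with illustrative texel values, looks like this:

```python
# Sketch: bilinear filtering - blend the four nearest texels using the fractional
# position (alpha, beta) of the sample point between them. Values are illustrative.
def bilinear(t1, t2, t3, t4, alpha, beta):
    # t1..t4 are the texels at top-left, top-right, bottom-left, bottom-right.
    top    = t1 * (1 - alpha) + t2 * alpha    # blend horizontally along the top pair
    bottom = t3 * (1 - alpha) + t4 * alpha    # blend horizontally along the bottom pair
    return top * (1 - beta) + bottom * beta   # then blend the two results vertically

# A sample point a quarter of the way across and three quarters of the way down.
print(bilinear(10, 20, 30, 40, alpha=0.25, beta=0.75))   # -> 27.5
```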

    Fortunately for everyone involved in 3D games, whether playing them or making them, this happens automatically in the graphics processing chip. In fact, this is what the TMU chip in the 3dfx Voodoo did: sampled 4 texels and then blended them together. Direct3D oddly calls this bilinear filtering, but since the time of Quake and the Voodoo's TMU chip, graphics cards have been able to do bilinear filtering in just one clock cycle (provided the texture is sitting handily in nearby memory, of course).

    Linear filtering can be used alongside mipmaps, and if you want to get really fancy with your filtering, you can take 4 texels from a texture, then another 4 from the next level of mipmap, and then blend all that lot together. And Direct3D's name for this? Trilinear filtering. What's tri about this process? Your guess is as good as ours...

    The last filtering method to mention is called anisotropic. This is actually an adjustment to the process done in bilinear or trilinear filtering. It initially involves a calculation of the degree of anisotropy of the primitive's surface (and it's surprisingly complex, too) -- this value increases as the primitive's aspect ratio alters due to its orientation:

    The above image shows the same square primitive, with equal length sides; but as it rotates away from our perspective, the square appears to become a rectangle, and its width increases over its height. So the primitive on the right has a larger degree of anisotropy than those left of it (and in the case of the square, the degree is exactly zero).

    Many of today's 3D games allow you to enable anisotropic filtering and then adjust the level of it (1x through to 16x), but what does that actually change? The setting controls the maximum number of additional texel samples that are taken per original linear sampling. For example, let's say the game is set to use 8x anisotropic bilinear filtering. This means that instead of just fetching 4 texels values, it will fetch 32 values.

    The difference the use of anisotropic filtering can make is clear to see:

    Just scroll back up a little and compare nearest point sampling to maxed out 16x anisotropic trilinear filtering. So smooth, it's almost delicious!

    But there must be a price to pay for all this lovely buttery texture deliciousness and it's surely performance: all maxed out, anisotropic trilinear filtering will be fetching 128 samples from a texture, for each pixel being rendered. For even the very best of the latest GPUs, that just can't be done in a single clock cycle.

    If we take something like AMD's Radeon RX 5700 XT, each one of the texturing units inside the processor can fire off 32 texel addresses in one clock cycle, then load 32 texel values from memory (each 32 bits in size) in another clock cycle, and then blend 4 of them together in one more tick. So, for 128 texel samples blended into one, that requires at least 16 clock cycles.

    Now the base clock rate of a 5700 XT is 1605 MHz, so sixteen cycles takes a mere 10 nanoseconds. Doing this for every pixel in a 4K frame, using just one texture unit, would still only take 70 milliseconds. Okay, so perhaps performance isn't that much of an issue!

    Even back in 1996, the likes of the 3Dfx Voodoo were pretty nifty when it came to handling textures. It could max out at 1 bilinear filtered texel per clock cycle, and with the TMU chip rocking along at 50 MHz, that meant 50 million texels could be churned out, every second. A game running at 800 x 600 and 30 fps, would only need 14 million bilinear filtered texels per second.

    However, this all assumes that the textures are in nearby memory and that only one texel is mapped to each pixel. Twenty years ago, the idea of needing to apply multiple textures to a primitive was almost completely alien, but it's commonplace now. Let's have a look at why this change came about.

    Lighting the way to spectacular images

    To help understand how texturing became so important, take a look at this scene from Quake:

    It's a dark image, that was the nature of the game, but you can see that the darkness isn't the same everywhere - patches of the walls and floor are brighter than others, to give a sense of the overall lighting in that area.

    The primitives making up the sides and ground all have the same texture applied to them, but there is a second one, called a light map, that is blended with the texel values before they're mapped to the frame pixels. In the days of Quake, light maps were pre-calculated and made by the game engine, and used to generate static and dynamic light levels.

    The advantage of using them was that complex lighting calculations were done to the textures, rather than the vertices, notably improving the appearance of a scene and for very little performance cost. It's obviously not perfect: as you can see on the floor, the boundary between the lit areas and those in shadow is very stark.

    In many ways, a light map is just another texture (remember that they're all nothing more than 2D data arrays), so what we're looking at here is an early use of what became known as multitexturing. As the name clearly suggests, it's a process where two or more textures are applied to a primitive. The use of light maps in Quake was a solution to overcome the limitations of Gouraud shading, but as the capabilities of graphics cards grew, so did the applications of multitexturing.
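
In its simplest form, that blend is just a per-channel multiply of the base texture by the light map, roughly like this (the values are illustrative):

```python
# Sketch: multitexturing in its simplest form - modulate a base texture texel
# by a light map texel before the result is written to the frame.
def modulate(base, lightmap):
    """Per-channel multiply: dark light-map areas darken the base texture."""
    return tuple(b * l for b, l in zip(base, lightmap))

wall_texel  = (0.6, 0.5, 0.4)   # sampled from the wall texture
light_texel = (0.2, 0.2, 0.3)   # sampled from the pre-computed light map

print(modulate(wall_texel, light_texel))   # a dimly lit patch of wall
```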

    The 3Dfx Voodoo, like other cards of its era, was limited by how much it could do in one rendering pass. This is essentially a complete rendering sequence: from processing the vertices, to rasterizing the frame, and then modifying the pixels into a final frame buffer. Twenty years ago, games performed single pass rendering pretty much all of the time.

    This is because processing the vertices twice, just because you wanted to apply some more textures, was too costly in terms of performance. We had to wait a couple of years after the Voodoo, until the ATI Radeon and Nvidia GeForce 2 graphics cards were available before we could do multitexturing in one rendering pass.

    These GPUs had more than one texture unit per pixel processing section (aka, a pipeline), so fetching a bilinear filtered texel from two separate textures was a cinch. That made light mapping even more popular, allowing for games to make them fully dynamic, altering the light values based on changes in the game's environment.

    But there is so much more that can be done with multiple textures, so let's take a look.

    It's normal to bump up the height

    In this series of articles on 3D rendering, we've not addressed how the role of the GPU really fits into the whole shebang (we will do, just not yet!). But if you go back to Part 1, and look at all of the complex work involved in vertex processing, you may think that this is the hardest part of the whole sequence for the graphics processor to handle.

    For a long time it was, and game programmers did everything they could to reduce this workload. That meant reaching into the bag of visual tricks and pulling off as many shortcuts and cheats as possible, to give the same visual appearance of using lots of vertices all over the place, but not actually use that many to begin with.

    And most of these tricks involved using textures called height maps and normal maps. The two are related in that the latter can be created from the former, but for now, let's just take a look at a technique called bump mapping.

    Bump mapping involves using a 2D array called a height map, that looks like an odd version of the original texture. For example, in the above image, there is a realistic brick texture applied to 2 flat surfaces. The texture and its height map look like this:

    The colors of the height map represent the normals of the brick's surface (we covered what a normal is in Part 1 of this series of articles). When the rendering sequence reaches the point of applying the brick texture to the surface, a sequence of calculations takes place to adjust the color of the brick texture based on the normal.

    The result is that the bricks themselves look more 3D, even though they are still totally flat. If you look carefully, particularly at the edges of the bricks, you can see the limitations of the technique: the texture looks slightly warped. But for a quick trick of adding more detail to a surface, bump mapping is very popular.

    A normal map is like a height map, except the colors of that texture are the normals themselves. In other words, a calculation to convert the height map into normals isn't required. You might wonder just how can colors be used to represent an arrow pointing in space? The answer is simple: each texel has a given set of r,g,b values (red, green, blue) and those numbers directly represent the x,y,z values for the normal vector.

    In the above example, the left diagram shows how the direction of the normals change across a bumpy surface. To represent these same normals in a flat texture (middle diagram), we assign a color to them. In our case, we've used r,g,b values of (0,255,0) for straight up, and then increasing amounts of red for left, and blue for right.

    Note that this color isn't blended with the original pixel - it simply tells the processor what direction the normal is facing, so it can properly calculate the angles between the camera, lights and the surface to be textured.
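
A small sketch of how such a texel can drive per-pixel lighting, assuming the common convention in which each color channel maps 0-255 onto -1 to +1 (most normal maps store "straight up" as roughly (128, 128, 255); the diagram described above uses its own scheme, so treat these numbers as illustrative):

```python
# Sketch: decode a normal from a texel's r,g,b values and use it for per-pixel
# lighting. Assumes the common 0-255 -> -1..+1 channel mapping; values illustrative.
import numpy as np

def decode_normal(r, g, b):
    n = np.array([r, g, b]) / 127.5 - 1.0   # map each channel from 0..255 to -1..+1
    return n / np.linalg.norm(n)            # re-normalize to unit length

def diffuse(normal, light_dir):
    """Lambert term: how directly the light hits the surface (0 means not at all)."""
    return max(np.dot(normal, light_dir), 0.0)

normal = decode_normal(128, 128, 255)   # a texel encoding "straight out of the surface"
light  = np.array([0.0, 0.0, 1.0])      # a light shining straight at the surface
print(diffuse(normal, light))           # close to 1.0 -> fully lit
```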

    The benefits of bump and normal mapping really shine when dynamic lighting is used in the scene, and the rendering process calculates the effects of the light changes per pixel, rather than for each vertex. Modern games now use a stack of textures to improve the quality of the magic trick being performed.

    This realistic looking wall is amazingly still just a flat surface -- the details on the bricks and mortar aren't done using millions of polygons. Instead, just 5 textures and a lot of clever math gets the job done.

    The height map was used to generate the way that the bricks cast shadows on themselves, and the normal map to simulate all of the small changes in the surface. The roughness texture was used to change how the light reflects off the different elements of the wall (e.g. a smoothed brick reflects more consistently than rough mortar does).

    The final map, labelled AO in the above image, forms part of a process called ambient occlusion: this is a technique that we'll look at in more depth in a later article, but for now, it just helps to improve the realism of the shadows.

    Texture mapping is crucial

    Texturing is absolutely crucial to game design. Take Warhorse Studios' 2018 release Kingdom Come: Deliverance -- a first person RPG set in 15th century Bohemia, a historical region in central Europe. The designers were keen on creating as realistic a world as possible for the given period. And the best way to draw the player into a life hundreds of years ago was to have the right look for every landscape view, building, set of clothes, hair, everyday items, and so on.

    Each unique texture in this single image from the game has been handcrafted by artists and their use by the rendering engine controlled by the programmers. Some are small, with basic details, and receive little in the way of filtering or being processed with other textures (e.g. the chicken wings).

    Others are high resolution, showing lots of fine detail; they've been anisotropically filtered and then blended with normal maps and other textures -- just look at the face of the man in the foreground. The different requirements of the texturing of each item in the scene have all been accounted for by the programmers.

    All of this happens in so many games now, because players expect greater levels of detail and realism. Textures will become larger, and more will be used on a surface, but the process of sampling the texels and applying them to pixels will still essentially be the same as it was in the days of Quake. The best technology never dies, no matter how old it is!

    Sunday, December 1, 2019

    Professor uses magic tricks to teach students math

    By IBT Staff Reporter 06/03/09 AT 4:33 PM

    A professor from a British university has created a new way to help students to learn math by teaching them magic tricks.

    Professor Peter McOwan from Queen Mary's School of Electronic Engineering and Computer Science of the University of London has produced a series of videos entitled 'Maths in Magic' and 'Hustle' in conjunction with More Maths Grads (MMG).

    MMG is a three year project that aims to increase the number of students studying mathematics and encourage participation from groups of learners who have not traditionally been well represented in higher education.

    "It's fascinating how many great magic tricks and more worryingly con tricks work using hidden mathematical principles", explained Professor McOwan.

    "The videos were made to help show how the power of maths can entertain and mystify, and how if we aren't careful can even part us from our hard earned cash."

    Below is a demo video: