Wednesday, October 23, 2019

Can Artificial Intelligence “Think”?

Sci-fi and science can’t seem to agree on how we should think about artificial intelligence. Sci-fi portrays artificial intelligence agents as thinking machines, while businesses today use artificial intelligence for more mundane tasks like filling out forms with robotic process automation or driving your car. When interacting with these artificial intelligence interfaces at our current level of AI technology, our human inclination is to treat them like vending machines rather than like people. Why? Because thinking of AI as a person (anthropomorphizing) leads to immediate disappointment. Today’s AI is very narrow, and straying across the invisible line between what these systems can and can’t do leads to generic responses like “I don’t understand that” or “I can’t do that yet”. Although the technology is extremely cool, it just doesn’t think in the way that you or I think of as thinking.

[Photo: Auguste Rodin's sculpture "The Thinker" at the Barberini Museum in Potsdam, Germany. Photo by Michele Tantussi/Getty Images]

Let’s look at how that “thinking” process works, and examine how there are different kinds of thinking going on inside AI systems. 

First, let me convince you that thinking is a real thing. Putting aside the whole conversation about consciousness, there is a pretty interesting philosophical argument that thinking is just computing inside your head. As it turns out, this has been investigated, and we can draw some conclusions beyond just imagining what thinking might really be. In the book “Thinking, Fast and Slow”, Nobel laureate Daniel Kahneman describes the two systems in our brains that do thinking: a fast, automatic thinking system (System 1), and a slow, more deliberative thinking system (System 2). Just as we have a left and right brain stuck in our one head, we also have these two types of thinking systems baked into our heads, talking to each other and shaping the way we see the world. And so thinking is not so much about being right as it is a pair of strategies for making decisions. Today's AI systems learn to think fast and automatically (like System 1), but artificial intelligence as a science doesn’t yet have a good handle on the thinking-slow approach we get from System 2. Today’s AI systems also make the same sorts of mistakes as System 1, where biases, shortcuts, and generalizations get baked into the “thinking” machine during learning. With today’s AI, there is no deliberative, step-by-step thinking process going on. How can AI “think”, when a major component of what thinking is all about isn’t ready for primetime?

Now that we have a bit more definition about what thinking is, how can we make more human-like artificial intelligence? Maybe representing feedback loops will get us to a sort of thinking machine like System 2. Well, as it turns out, we have not cracked that yet. AI models don’t contain common knowledge about the world. For example, I recall Yann LeCun, a “founding father” of modern AI, giving the example sentence “He walked through the door” and pointing out that today’s AI models can’t decide what it means. There is a silly interpretation where we conclude that a person crashed through a door like a superhero, smashing it to pieces. There is another interpretation where either the door was open or the person opened the door to walk through the doorway. Unfortunately, without common knowledge, you don’t really know which situation is more likely. This shows us that even “thinking fast” situations can go poorly using the tools we have available today.

We live in a world where fast thinking AI is the norm, and the models are slowly trained on huge amounts of data. The reason you can’t make a better search engine than Google is not the secrecy of their search algorithms. Rather, it is the fact that they have data you don’t have, from excellent web crawlers to cameras on cars driving around your neighborhood. Currently, the value in AI is the data, and the algorithms are mostly free and open source. Gathering masses of data is not necessarily enough to ensure a feature works. Massive efforts at human labor are often required. In the future, thinking algorithms that teach themselves may themselves represent most of the value in an AI system, but for now, you still need data to make an AI system, and the data is the most valuable part of the project. 

Thinking is not easily separated from the human condition, but we humans are also far from perfect. We may be smart on average, but as individuals, we are not built to do statistics. There's some evidence for the wisdom of crowds, but a crowd holding pitchforks and torches may change your mind. As it turns out, we are adapted through the generations to avoid being eaten by lions, rather than to be the best at calculus. We humans also have many biases and shortcuts built into our hardware, and it’s well documented. For example, correlation is not causation, but we often get them mixed up. A colleague of mine has a funny story from her undergraduate math degree at a respected university, where the students played a game called “stats chicken”: delaying their required statistics course until the fourth year, hoping each year that the requirement would be dropped from the program.

Given these many limitations on our human thinking, we are often puzzled by the conclusions reached by our machine counterparts. We “think” so differently from each other. When we see a really relevant movie or product recommendation, we feel impressed by this amazing recommendation magic trick, but we don’t get to see how the magic trick is performed. And one is tempted to conclude that machine-based thinking is better or cleaner than our messy biological process, because it is built on so much truth and mathematics. In many situations that’s true, but that truth hides a dark underlying secret. In many cases, it is not so clear why artificial intelligence works so well. The engineering got a bit ahead of the science, and we are playing with tools we don’t fully understand. We know they work, and we can test them, but we don’t have a good system for proving why things work. In fact, there are some accusations even in respected academic circles (slide 24, here) that the basic theory of artificial intelligence as a field of science is not yet rigorously defined. It is not just name-calling or jealousy being hurled by the mathematicians at the engineers. AI is a bunch of fields stuck together, and there really is a lack of connection in the field between making things work and proving why they work. And so the question about thinking and AI is also a question about knowledge. You can drive a car without knowing exactly how it works inside, and so maybe you can think, even if you don’t know why your thinking works.

Assuming we don’t have a concrete theory underlying the field of artificial intelligence, how can engineers get anything done? Well, there are very good ways to test and train AI models, which is good enough for today’s economy. There are many types of AI, including supervised learning, unsupervised learning, reinforcement learning, and more. Engineers don’t tend to ask questions like “is it thinking?”, and instead ask questions like “is it broken?” and “what is the test score?”

Supervised learning is a very popular type of artificial intelligence that makes fast predictions in some narrow domain. The state-of-the-art machinery for doing supervised learning on large datasets is feed-forward deep neural networks. This type of system does not really think. Instead, it learns to pick a label (for classification) or a number (for regression) based upon a set of observations. The way decisions are baked into neural networks during “learning” is not obvious without a strong validation step. More transparent AI models have been around for a long time, for example, in areas such as game theory for military planning. Explicit models like decision trees are a common approach to developing an interpretable AI system, where a set of rules is learned that defines your path from observation to prediction, like a choose your own adventure story where each piece of data follows a path from the beginning of the book to the conclusion. 
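To make the classification idea concrete, here is a minimal sketch of supervised learning: a one-rule decision tree (a "decision stump") that learns a threshold from labeled observations. The data and the threshold rule are invented for illustration, not taken from any production system.

```python
def fit_stump(xs, ys):
    """Learn the threshold on one feature that minimizes training errors."""
    best_t, best_errors = None, None
    for t in sorted(set(xs)):
        # Candidate rule: predict label 1 whenever x >= t.
        errors = sum((x >= t) != y for x, y in zip(xs, ys))
        if best_errors is None or errors < best_errors:
            best_t, best_errors = t, errors
    return best_t

# Toy labeled data: hours of daylight -> "is it summer?" (0/1 labels).
xs = [8, 9, 10, 14, 15, 16]
ys = [0, 0, 0, 1, 1, 1]
t = fit_stump(xs, ys)
print(t)             # learned decision boundary: 14
print(int(12 >= t))  # prediction for a new observation: 0 (not summer)
```

Deep neural networks replace the single threshold with millions of learned parameters, but the supervised recipe is the same: observations in, labels or numbers out, with a validation step to check what got baked in.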

Another type of artificial intelligence called reinforcement learning involves learning the transition from one decision to the next based on what’s going on in the environment and what happened in the past. We know that without much better “environment” models of the world, these approaches learn very slowly, even for the most basic tasks. Systems that learn to solve problems this way rely heavily on accurate models of how the world works. When dealing with a problem related to humans, they need lots of data on what those humans do, or like, or think. For example, you can’t learn to generate amazing music without data on what humans like to listen to. In a game-playing simulator an AI model can play against itself very quickly to get smart, but in human-related applications the slow pace of data collection gums up the speed of the project. And so in a broad sense, the AI field is still under construction at the same time as we are plugging lots of things into it.
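For a taste of how this looks in code, here is a tiny sketch of tabular Q-learning on a made-up five-cell corridor environment. The environment, reward, and hyperparameters are all invented for illustration; the agent learns by trial and error that walking right reaches the reward.

```python
import random

random.seed(0)

# A made-up environment: a 5-cell corridor; reaching cell 4 pays reward 1.
N_STATES, ACTIONS = 5, (-1, +1)          # actions: move left or right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the table, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        best_next = max(Q[(s2, b)] for b in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy marches right from every cell.
policy = [max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)  # [1, 1, 1, 1]
```

Even in this toy, the agent needs hundreds of episodes to master a four-step corridor, which hints at why real reinforcement learning is so data-hungry without good environment models.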

[Photo: Jen-Hsun Huang, president and chief executive officer of Nvidia Corp., speaks during the company's event at the 2019 Consumer Electronics Show (CES) in Las Vegas, Nevada. Photographer: David Paul Morris/Bloomberg. © 2018 Bloomberg Finance LP]

Regardless of the underlying technical machinery, when you interact with a trained artificial intelligence model in the vast majority of real-life applications today, the model is pre-trained and is not learning on the fly. This improves the stability of your experience, but it also hides the messiness of the underlying technology. The learning tends to happen in a safe space where things can be tested, and as a customer of the AI system you experience only the predictions (also called inference).

Despite the hype, AI models that think like we do are not coming around the corner to exceed humanity in every way. Truly thinking machines are definitely worthy of research, but they are not here just yet. Today, AI models and human analysts work side-by-side, where the analyst gives an opinion and is assisted by an AI model. It is useful to consider more general mathematical models, like rainfall estimation and sovereign credit risk models, to see how such models are carefully designed by humans, encoding huge amounts of careful and deliberative human thinking. The practice of building AI systems involves a lot of reading and creativity. It's not just coding away at the keyboard.

I feel that AI software developers gradually build a sense for how to think about what an AI model is doing, and it isn’t “thinking”. I wanted to get some input from someone in the artificial intelligence field unrelated to me, to see if they feel the same way. Through the CEO of DTA, I set up a talk with Kurt Manninen about his work on an AI product called AstraLaunch. I asked Kurt a lot of technology questions, leading up to the question “Does the system think like people do?”

AstraLaunch is a pretty advanced product involving both supervised and unsupervised learning for matching technologies with company needs on a very technical basis. A complicated technology like this is a good place to be thinking about “thinking”. The system has an intake process that leads into a document collection stage, and then outputs a chart of sorted relevant documents and technologies. What I wanted to understand from Kurt was how he thinks about what the matching technology is doing. Is the system thinking when it maps the needs of NASA to the technology capabilities of companies? When diagnosing an incorrect prediction, does he think of the model as making a mistake, or does the mistake originate with the model maker and/or the data?

Kurt’s answer was really close to what I expected from my own experience. Technology like AstraLaunch involves humans and AI models working together to leverage the strengths of each information processing approach. But Kurt felt strongly, as I do, that bugs in AI models are the fault of people, not the model. An AI developer can see where the training wasn’t set up properly to understand language or vocabulary, or where the dataset collection went wrong, and so on.

Returning to the original question about artificial intelligence and thinking, I think we can solidly conclude that these systems don’t do thinking at all. If we only have fast and automatic (System 1) artificial intelligence to work with, can we think of an AI model as a gifted employee that thinks differently about the world? Well, no. AI will probably cheat if the training is unmanaged, and so it is a lazy, deceptive employee. It will take the easy way out to get the highest score on every test, even if the approach is silly or wrong. As we try to build a “System 2” that thinks more like us, we need to remember that thinking is not about passing a test. Instead, consider this quote:

The test will last your entire life, and it will be comprised of the millions of decisions that, when taken together, will make your life yours. And everything, everything, will be on it.

John Green

Disclosure: I have no financial interest in any of these companies or products.

Tuesday, October 22, 2019

Thursday, October 3, 2019

Stanford professor’s mathematical surprises from phenomena of daily life

May 2, 2018

Tadashi Tokieda is known for developing and sharing tricks and toys that question our assumptions about math and physics – a passion that's grown from his pursuit of fresh knowledge and love of magic.

    It wasn't until Tadashi Tokieda, now a professor of mathematics at Stanford University, was in his twenties that he began to study math. Growing up in Japan, Tokieda was a painter, and later in France he was a classical philologist – an expert in ancient languages. While in a mathematics program at Princeton University, he broadened his studies to include physics and other phenomena of the natural world.

    [Video by Numberphile: Mathematics Professor Tadashi Tokieda demonstrates how angular momentum comes into play when mastering the kendama, a Japanese skill toy.]

    Eventually, Tokieda combined all of this with his long-standing interest in magic, and videos of his resulting tricks have attracted millions of views online.

    At the root of Tokieda's unusual career path and varied interests is a deep appreciation for the value of surprise.

    "To me, surprise means that I used to assume something – to think something was absolutely true – and then it turns out to be wrong," said Tokieda. "I'm really hungry for surprises because each one makes us ever-so-slightly but substantially smarter."

    Rather than building on science that has a long legacy and is often at the edge of breakthrough – what he calls "science in flower" – Tokieda prefers pursuing answers amidst the unknown – "science in bud." He likes to share his discoveries with others, often while he's still exploring what he's found. For this purpose, Tokieda has collected and fashioned over 200 toys and tricks to model the surprises he studies.

    Compared to typical playthings, Tokieda's toys are common objects that, when manipulated or thought about in creative ways, do something unexpected. These are not to be mistaken for oversimplified teaching implements but rather instruments of surprise that are designed to encourage a collaborative discovering experience.

    From magic to math

    A memorable exchange between a professor and student during the first math lecture Tokieda ever attended in part inspired his career path.

    [Video by Numberphile: Mathematics Professor Tadashi Tokieda demonstrates an unexpected way to inflate a paper balloon.]

    "I got the surprise of my life when a student suggested that something didn't seem right and the professor looked at it for a few seconds and said, 'Ah, yes. I'm wrong. You are right,'" recalled Tokieda. "To hear a dignified adult say that and just move on? It was astonishing!"

    Tokieda maintains that deep appreciation for the privilege of being wrong, considering every mistake or overturned assumption an opportunity for learning and discovery. For this reason, he gravitated toward studying familiar phenomena and objects, which are particularly ripe for revelation because we accumulate a lifetime of assumptions about how they work and why. Dealing in these also helps non-scientists, colleagues in other fields, and friends and family share in Tokieda's experiences of discovery.

    "I decided that each time I figured out something or wrote a paper, I would design a little tabletop experiment which would show – if only partially – the fun and the surprise that I experienced when I was doing the research," said Tokieda.

    His collection made the recent journey overseas from the University of Cambridge to its new home on the shelves of his Stanford office. Some of the toys and tricks can be seen in action in the videos produced by Numberphile. This August in Rio de Janeiro, Tokieda will give one of the public lectures at the International Congress of Mathematicians on a particularly intriguing selection of these toys and the theme of "singularity" that emerges from them.

    Inspiring science

    Many of Tokieda's magic tricks require only everyday items, such as paper, rubber bands and coffee mugs. Other parts of his repertoire are extensions or examinations of toys that already exist, including a penguin that rocks down a wooden ramp, spinning tops and a kendama, the traditional Japanese wooden skill toy where the player attempts to swing a ball onto a spike.

    Tokieda prefers tricks that can build on each other and moments of wonder that take time to perfect. The magic teaches him as well, with new curiosities occasionally leading to publications.

    No matter who is in Tokieda's audience as he performs a trick – whether a family member, a student or a renowned mathematician – he hopes to elicit the same reactions of astonishment and curiosity. And even if he's done this trick a hundred times before, he is always looking for new surprises and seeking opportunities to make himself and others ever-so-slightly but substantially smarter.

    Wednesday, October 2, 2019

    In the art of mathematics, work is play and tricks are the trade

    Flipping through a deck of cards in the library as her friends and classmates slog through books and papers means that, sometimes, people want answers from Carolyn Chen.

    "I usually have to reassure my friends that, yes, I'm being productive," said Chen, who is among 15 freshmen in a seminar at Princeton University where work is play and tricks are the trade. "The Mathematics of Magic Tricks and Games" explores the mathematical principles behind games and magic tricks. Students then use those principles to create and master their own tricks and games.

    "Assignments for this class never feel like 'homework,'" said Chen, who is interested in pursuing electrical engineering or computer science. "I spend a great portion of my Saturdays and Sundays playing around with my cards, trying to reinforce and extend what we learned in class. I always practice my tricks on my friends and roommate, and thankfully they also think it's fun."

    "Fun" is precisely the impression of mathematics the class (designated the Richard L. Smith '70 Freshman Seminar) is intended to leave with students, said Princeton mathematics professor Manjul Bhargava, who teaches the class. As an undergraduate at Harvard University, Bhargava learned about the mathematical theory behind magic from his adviser, magician-cum-mathematician Persi Diaconis, whose research focus includes coin-flipping, card-shuffling and other problems of randomness.

    [Photo caption: Students not only learn the mathematical principles behind tricks, but are encouraged to apply them to tricks of their own. Jamie Oliver (front) uses 52 cards to perform a variation of a trick the class learned that uses only 27.]

    Bhargava, now Princeton's Brandon Fradd, Class of 1983, Professor of Mathematics, figured those same lessons from his days as a student would be a good way to introduce young academics to a field widely considered difficult and inaccessible.

    "In grade school, mathematics is sometimes taught in a very robotic way of, here is the problem and here are the steps to solve it," Bhargava said. "As a result, sometimes it comes off as dry and students don't see the imaginative aspect. This course is meant to show that math is not a robotic science at all. It is an art and has a truly creative side. That's how mathematicians approach mathematics — creatively."

    In a recent class, Edgar von Ottenritter has a visitor select a card from a stack of 27, then name their favorite number. The visitor — or "victim," as the class calls the subjects of their tricks — draws a nine of clubs (unknown to von Ottenritter) and selects a favorite number of 12.

    [Photo caption: The seminar is intended to engage freshmen in mathematics by capitalizing on their enthusiasm. Carolyn Chen (front) is seated with (left to right) Jessie Liu and Lillian Xu.]

    Von Ottenritter then lays the cards out in three columns of nine cards, all of it seemingly random. The victim selects a column. Von Ottenritter scoops up the rows and lays them out into columns again. This is done three times. At the end, a focused von Ottenritter flips over cards from the top of the deck until he gets to the 12th card — the nine of clubs.

    Von Ottenritter sighs with relief and bows as the class applauds. Bhargava explains that the trick relies on a formula based on 3 that essentially allows the mathematically inclined magician to figure out which column the victim's card is in and then mentally cycle through until it comes up again.

    "It looks like you're mixing up the deck, but after three turns you're back where you started," Bhargava told the class. "Once you've named your column, in theory you have enough information to guess which one has the victim's card."
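The arithmetic behind the 27-card trick can be simulated in a few lines. This is my own reconstruction of the standard base-3 version, not necessarily the exact procedure taught in the seminar: each of the three pickups encodes one base-3 digit of the victim's favorite number, which is why three deals are enough.

```python
def deal_into_piles(deck):
    """Deal the deck round-robin into three piles of nine."""
    piles = [[], [], []]
    for i, card in enumerate(deck):
        piles[i % 3].append(card)
    return piles

def trick(secret_card, favorite_number):
    """Steer secret_card to position favorite_number (1-27) in three deals."""
    deck = list(range(27))
    # Base-3 digits of the target position, least significant digit first.
    target = favorite_number - 1
    digits = [target % 3, (target // 3) % 3, target // 9]
    for d in digits:
        piles = deal_into_piles(deck)
        chosen = next(p for p in piles if secret_card in p)  # the victim points
        others = [p for p in piles if p is not chosen]
        # Stack so that exactly d piles sit on top of the chosen pile.
        stacked = others[:d] + [chosen] + others[d:]
        deck = [card for pile in stacked for card in pile]
    return deck.index(secret_card) + 1  # 1-indexed position from the top

print(trick(secret_card=8, favorite_number=12))  # 12
```

Writing favorite_number - 1 in base 3 gives three digits; placing the chosen pile under that many piles at each pickup lands the card at exactly that position, for any card and any number from 1 to 27.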

    Despite the formulas and "perfect powers" (27 cards is ideal for the trick von Ottenritter performed), students in the seminar experiment with the encouragement of Bhargava and their classmates.


    Jamie Oliver, who plans to concentrate in chemistry, attempts the same trick but with 52 cards. This means the trick expands to five columns that alternate between containing 10 or 11 cards each. On top of that, his victim chooses a favorite number of 35, which he calls "an obnoxious number." Oliver explains to the class that while having five columns gives him more power to determine where the card is, the uneven piles can throw off his mental count, typically by one. A high favorite number such as 35 makes the likelihood of a missed guess greater.

    The class hovers over Oliver as he performs the involved trick. Some offer advice: "You should tell us a story," one says. The others laugh. Finally, Oliver counts out 35 cards. The last is the four of hearts. And it's his victim's card. More applause.

    Since the previous class, Oliver had spent about three hours developing the trick and practiced it 10 to 15 minutes a day, he explained. His decision to use 52 cards "was fairly whimsical," although a previous assignment was to expand any trick to use 52 cards.

    "Creating the tricks is a lot of fun for me and usually is the first homework assignment I will do on Tuesdays," Oliver said. "I put a good amount of practice into my tricks for class and usually bring a deck of cards to the dining hall once a day to show tricks to friends."

    Oliver has always had an interest in tactical games and been fascinated by magic, he said. Oliver had previously learned many of the mathematical concepts Bhargava discusses, but it wasn't until the seminar that he saw the connection, he said.

    "This class has shown me many cool applications for mathematical concepts I had learned in the past but never used," Oliver said.

    "Though I used to think math was really only used for the sciences, I now have an appreciation for the use of mathematics in art, music, games and magic," he said. "I am now more interested in math and its practical applications and can see myself taking more math classes down the line that I did not initially plan to take."

    Bhargava considers that type of enthusiasm as especially ripe in freshmen, and was a motivating factor in him creating the seminar, he said.

    "I read that the freshman seminar was one of their more formative experiences," Bhargava said. "I feel they should see the correct side of mathematics before they're seniors. The enthusiasm is really there as freshmen and this is a great time to catch them."

    Convinced that mathematics is more engaging than she thought, Chen again exemplified the reaction Bhargava hoped to prompt with his seminar.

    "I liked math in high school, but I never loved it," Chen said. "Now, I'm pretty sure I love it. This class has shown me that there is a whole other side of math that I just wasn't exposed to before. I've acquired a new appreciation for math and its elegance."

    Tuesday, October 1, 2019

    Numberplay: The Math Behind the Magic

     


    The mathemagician tosses a case holding a deck of cards to the first row of his one-thousand-strong audience. The deck is tossed back from person to person till it ends up near the back of the hall, out of the performer's control. The final deck holder is asked to open the case and give the cards a straight cut. The deck passes to a second spectator, who is also asked to cut the deck and pass it on. Finally, when the fifth spectator cuts, she is asked to keep the top card and pass the deck back to the fourth spectator, who removes the current top card and passes it to the third spectator, and so on backward. When each of the five spectators has a card, the performer says, "Please make a mental picture of your card, and try to send it to me telepathically." As they do this, the performer concentrates, and then appears confused. "You're doing a great job, but there is too much information coming in all at once. Would those of you who have a red card stand up and concentrate?" The first and third spectators stand. The performer appears relieved. "That's perfect. I see a seven of hearts." One of the two spectators indicates that that is indeed his card. "And a jack of diamonds? Yes!" Now, focusing on the other three spectators, the mathemagician names all three black cards!

    [Photo: Mathematician-magicians Ron Graham (left) and Persi Diaconis (right) and their book.]

    Unlike some other magic tricks, this one does not rely on sleight-of-hand or accomplices. It is a purely mathematical trick, described by Persi Diaconis and Ron Graham in their new book "Magical Mathematics: The Mathematical Ideas That Animate Great Magic Tricks." As we have remarked before, there is something about stage magic (and incidentally, skepticism!) that captivates mathematicians who love puzzles. The list of amateur and professional magicians among puzzle masters is legion: we have in our columns featured the puzzles of Martin Gardner, Raymond Smullyan, Persi Diaconis, Fitch Cheney, Norman Gilbreath, and Colm Mulcahy among others. Diaconis and Graham have done a great job putting together the math behind magic tricks in their wonderful 9-by-11-inch 244-page book (which I'm holding between the fingers of my left hand in the above picture).

    So how is the card trick done? Not so fast! We first have some puzzles to solve…

    1. Consider the binary sequence 11100010. If you take a window of length 3 bits and move it along the sequence one bit at a time looping back to the beginning to complete a cycle, you will get all the possible sets of three bits, without repeats: 111, 110, 100, 000, 001, 010, 101 and 011. This is a truly magical property. Can you construct a different 8-bit sequence that has this property? Sequences that can be made identical to the above sequence by rotation don't count. How many such sequences are there?
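These puzzles invite brute force. Here is a short checker (an illustration of my own, not from the book) that verifies the window property for the sequence above and counts all 8-bit sequences sharing it, with rotations counted separately:

```python
from itertools import product

def has_window_property(bits, k):
    """True if the cyclic length-k windows of bits cover all 2**k patterns."""
    n = len(bits)
    windows = {tuple(bits[(i + j) % n] for j in range(k)) for i in range(n)}
    return n == 2 ** k and len(windows) == 2 ** k

print(has_window_property([1, 1, 1, 0, 0, 0, 1, 0], 3))  # True

# Count every 8-bit sequence with the property (rotations counted separately).
count = sum(has_window_property(list(s), 3) for s in product([0, 1], repeat=8))
print(count)
```

Since all eight windows of a valid sequence are distinct, its eight rotations are distinct too, so dividing the count by 8 gives the number of essentially different sequences the first puzzle asks for.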

    2. Can you find a constructive rule to create a window-4 sequence like the above? It should be 16 bits long. Here's something more difficult: how many such window-4 sequences are there? If you can get past that, try making a window-5 sequence. And the Holy Grail is to try and figure out a general formula for the number of window-n sequences with this property.

    3. The magic trick that we described involves a window-5 sequence of the above type. Can you come up with an elegant scheme to map playing cards to such a sequence? (Hint: the "deck" has only 32 cards. People don't notice – or complain!) Can you see how the unique property of the sequence allows the magic trick to be performed?

    These kinds of sequences have a special name, which will be revealed later. That is, if one of our smart readers doesn't discover it first! On account of their unique properties, these sequences have applications not just in magic tricks, but also in graph theory, robotic vision, cryptography and DNA analysis. Diaconis and Graham elucidate all this, and then move to magic variations that lead to the Gilbreath Principle, and further afield, to the Mandelbrot Set and Penrose tilings. The book is a virtuoso mathematical performance by two mathematician-performers!

    I invite readers to describe any mathematical magic that impressed you. And for our word challenge, how about constructing the longest cyclic window-3 sequence of legal three letter words without repeats? Here's an 8-word example: ALAMANAG, which gives ALA, LAM, AMA, MAN, ANA, NAG, AGA and GAL. (Here's a list of all legal 3-letter words.)

    For our word ladder, how about trying to go from UNIQUE to SEQUENCE? That should be challenging as well!

    The extended Numberplay word-ladder rules are given below.

    Numberplay Word Ladder Rules and Scoring

    1. You can change a single letter in place, as in traditional word ladders — for example, SOAP to SOUP.
    2. You can add or subtract a single letter to increase or decrease the length of a word — for example, MATH to MATCH or vice versa.
    3. You can change a single letter and rearrange the letters of the word to give another word of the same length — for example, MUSEUM to SUMMED (with a U changed to D and then rearrangement).
    4. All words in the ladder have to be unique.

    The aim is to make the shortest possible ladder. If two ladders are the same length, the following familiar Scrabble tie-break scoring will be used. For each step in the ladder, except the original words, add points for each letter as follows: Q,Z: 10; J,X: 8; K: 5; F,H,V,W,Y: 4; B,C,M,P:3; D,G: 2; All others: 1. The ladder with the highest tie-break score wins.
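The tie-break arithmetic is easy to automate. Here is a small helper of my own, using the letter values quoted above; I read "except the original words" as excluding the two endpoint words of the ladder:

```python
# Scrabble-style letter values from the scoring rules above.
SCORES = {}
for letters, pts in [("QZ", 10), ("JX", 8), ("K", 5), ("FHVWY", 4), ("BCMP", 3), ("DG", 2)]:
    for c in letters:
        SCORES[c] = pts

def word_points(word):
    """Points for one word; letters not listed above are worth 1."""
    return sum(SCORES.get(c, 1) for c in word.upper())

def ladder_score(ladder):
    """Tie-break score: total points over every word except the two endpoints."""
    return sum(word_points(w) for w in ladder[1:-1])

print(word_points("MATH"))                     # 3 + 1 + 1 + 4 = 9
print(ladder_score(["SOAP", "SOUP", "SOUR"]))  # only SOUP counts: 6
```

A solver comparing two ladders of equal length would simply prefer the one with the higher ladder_score.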

    If you want to post a diagram or graphic with your comment, or suggest a new puzzle, please e-mail it to numberplay@nytimes.com.