Cracking Hitler's unbreakable code: How the Colossus computer helped beat the Nazis

At the height of Nazi power during the Second World War, Hitler's communications with his High Command were protected by a code thought to be uncrackable. But a German operator's mistake, retransmitting a long message with the same machine settings, was the breakthrough the Allied codebreakers at Bletchley Park in the UK needed, giving them vital clues to how the Lorenz cipher machine worked and, ultimately, a way to crack the code.

SEE: Hacking the Nazis: The secret story of the women who broke Hitler's codes

The few surviving veterans of Bletchley's WWII codebreaking effort gathered at Bletchley Park recently to pay tribute to Captain Jerry Roberts, who played a key role in cracking the Lorenz cipher. In the video above, you can hear their recollections and why unpicking Lorenz was so vital to the Allied war effort. Lorenz didn't carry the day-to-day running of the German war machine; it carried plans to move army divisions about, operations that would take the Germans several weeks to organize.

In the video, Margaret Bullen shares her memories of helping to wire up the computer, alongside Colossus operator Irene Dixon's reflections on the importance of their work. The Lorenz decrypts provided information that changed the course of the war in Europe and saved lives at critical junctures like the D-Day landings. After the war, General Eisenhower said that the intelligence gleaned at Bletchley had shortened the fighting by at least two years.

Unlocking Emily's World

It's part of a sound-processing study comparing minimally verbal adolescents with high-functioning autistic adolescents who can speak, as well as normal adolescents and adults. The investigation is painstaking, because every study must be adapted for subjects who not only don't speak but may also be prone to easy distraction, extreme anxiety, aggressive outbursts, and even running away.
Seated before a computer, she watches as pictures of everyday items pop up on the screen: a toothbrush, a shirt, a car, a shoe. When a computer-generated voice names one of these objects, Emily's job is to tap the correct picture. Emily's earlier pilot testing of this study showed that she understands far more than she can say. But today, she's just not interested. Between short flurries of correct answers, Emily weaves her head, slumps in her chair, or flaps her elbows as the computer voice drones on: car. When one of the researchers tries to get Emily back on task, she simply taps the same spot on the screen over and over. Finally, she gives the screen a hard smack.

The next session is smoother. Emily is given a kind of IQ test in which she quickly and (mostly) correctly matches shapes and colors, identifies patterns, and points out single items within increasingly complicated pictures of animals playing in the park, kids at a picnic, or cluttered yard sales.
Emily is minimally verbal, not nonverbal. She'll say "car" when she wants to go for a ride or "home" when she's out somewhere and has had enough. Sometimes she communicates with a combination of sounds and signs or gestures, because she has trouble saying words with multiple syllables. For instance, when she needs a "bathroom," her version sounds like "ba ba um," but she combines it with a closed-hand gesture. When she's someplace she doesn't want to be, she'll ask to go to the bathroom five or six times.

"The first word Emily ever said was 'apple,' when she was four years old. Said it, and ate it. It was amazing to me," her dad recalls.

The final item on the morning agenda is an EEG study, in which Emily must wear a net of moist electrodes fitted over her head while she listens to a series of beeps in a small, soundproof booth. The researchers have tried EEG with Emily twice before in pilot testing. The first time, she tolerated the electrode net. The second time, she refused. This time, with her dad to comfort her and a rewarding snack of gummi bears, Emily dons the neural net without protest.

Emily sits with BU Research Assistant Briana Brukilacchio at the Center for Autism Research Excellence and watches one of her favorite movies, Frozen, while participating in a research study.

The point of this study is to see how well Emily's brain distinguishes differences in sound—a key to understanding speech. For instance, normally developing children learn very early, well before they can speak, to separate out somebody talking from the birds chirping outside the window or an airplane overhead. They also learn to pay attention to deviations in speech that matter—the word "cat" versus "cap"—and to ignore those that don't: cat is cat whether mommy or daddy says it.

"The brain filters out what's important based on what it learns," says Shinn-Cunningham.
Some of this sound filtering is automatic, what brain researchers call "subcortical." The rest is more complicated, a top-down process of organizing sounds and focusing the brain's limited attention and processing power on what's important.

EEG measures electrical fields generated by neuron activity in different parts of the brain. There are 128 tiny EEG sensors surrounding Emily's head and upper neck. Each sensor is represented as a line jogging along on the computer monitor outside the darkened booth where Emily sits with her dad holding her hand, watching a silent version of her favorite movie, Shrek.

Barbara Shinn-Cunningham, a professor of biomedical engineering at the College of Engineering, has collaborated with Frank Guenther, a professor of speech, language and hearing sciences at BU's Sargent College of Health & Rehabilitation Sciences, to develop neural models of how brains understand and make speech that will help researchers learn more about children like Emily.

Today's experiment is focused on the automatic end of sound processing. A constant stream of beeps in one pitch is occasionally interrupted by a higher-pitched beep. How will Emily's brain respond? Most of the time, the 128 EEG lines are tightly packed as they move across the screen. However, muscle movements generate large, visible peaks and troughs in the signals when Emily blinks or lolls her head from side to side. Once, just after a gummi bear break, several large, concentrated spikes show her chewing. Shifts in attention are much more subtle, and the raw data will have to be processed before anything definitive can be said about Emily's brain. The readout is time-coded with every beep, and the researchers will be particularly interested in the signals from the auditory areas in the brain's temporal cortex, located behind the temples.

The beep test has six five-minute trials. But after about twenty minutes, Emily is getting restless. It's been a long morning.
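The processing described above—time-locking the readout to each beep so that tiny, consistent brain responses can be pulled out of much larger noise—is conventionally done by cutting a short window around every beep and averaging the windows. Here is a toy sketch of that idea on synthetic numbers; it is not the lab's actual pipeline, and the function name and parameters are invented for illustration:

```python
import numpy as np

def epoch_and_average(eeg, beep_samples, pre=50, post=200):
    """Average fixed-length windows cut around each beep.

    eeg: (n_channels, n_samples) array of raw EEG.
    beep_samples: sample indices where beeps occurred (the time codes).
    Returns the averaged response, shape (n_channels, pre + post).
    """
    epochs = []
    for s in beep_samples:
        if s - pre >= 0 and s + post <= eeg.shape[1]:
            win = eeg[:, s - pre:s + post]
            # Baseline-correct each channel against its pre-beep interval
            win = win - win[:, :pre].mean(axis=1, keepdims=True)
            epochs.append(win)
    return np.mean(epochs, axis=0)

# Toy demo: 4 channels of noise with a small bump added after each "beep"
rng = np.random.default_rng(0)
eeg = rng.normal(0.0, 1.0, size=(4, 5000))
beeps = np.arange(300, 4700, 300)
for b in beeps:
    eeg[:, b:b + 60] += 0.5  # tiny response, invisible in any single trial
erp = epoch_and_average(eeg, beeps)
print(erp.shape)  # (4, 250)
```

Any single window is dominated by noise (and, in a real recording, by blinks and chewing), but averaging many time-locked windows cancels activity that is not synchronized to the beeps, which is why the researchers need the full set of trials before saying anything definitive.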
She starts scratching at the net of sensors in her hair. She's frustrated that Shrek is silent. The EEG signals start to swing wildly. From inside the booth, stomping and moans of protest can be heard. When the booth's door is opened at the end of the fourth trial, Emily's eyes are red. She's crying. Her father and the researchers try to cajole her into continuing. "Just two more, Emmy," her dad says. But Emily has had enough; she will return to the center as the experiments move from beeps to words, and the last two trials can be finished then. All in all, it's been a successful morning.