
Brown undergraduate researcher teaches robots handwriting and drawing

May 15, 2019

PROVIDENCE, RI [Brown University] -- An algorithm developed by Brown University computer scientists enables robots to put pen to paper, writing words using stroke patterns similar to human handwriting. It's a step, the researchers say, toward robots that are able to communicate more fluently with human co-workers and collaborators.

"Just by looking at a target image of a word or sketch, the robot can reproduce each stroke as one continuous action," said Atsunobu Kotani, an undergraduate student at Brown who led the algorithm's development. "That makes it hard for people to distinguish if it was written by the robot or actually written by a human."

The algorithm makes use of deep learning networks that analyze images of handwritten words or sketches and can deduce the likely series of pen strokes that created them. The robot can then reproduce the words or sketches using the pen strokes it learned. In a paper to be presented at this month's International Conference on Robotics and Automation, the researchers demonstrate a robot that was able to write "hello" in 10 languages that employ different character sets. The robot was also able to reproduce rough sketches, including one of the Mona Lisa.

Stefanie Tellex, an assistant professor of computer science at Brown and Kotani's advisor, says that what makes this work unique is the ability of the robot to learn stroke order from scratch.

"A lot of the existing work in this area requires the robot to have information about the stroke order in advance," Tellex said. "If you wanted the robot to write something, somebody would have to program the stroke orders each time. With what Atsu has done, you can draw whatever you want and the robot can reproduce it. It doesn't always do the perfect stroke order, but it gets pretty close."

Another remarkable aspect of the work, Tellex says, is how the algorithm was able to generalize its ability to reproduce strokes. Kotani trained his deep learning algorithm using a set of Japanese characters, and showed that it could reproduce the characters and the strokes that created them with around 93 percent accuracy. But much to the researchers' surprise, the algorithm wound up being able to reproduce very different character types it had never seen before -- English print and cursive, for example.

"We would have been happy if it had only learned the Japanese characters," Tellex said. "But once it started working on English, we were amazed. Then we decided to see how far we could take it."

Tellex and Kotani asked everyone who works in Tellex's Humans to Robots lab to write "hello" in their native languages, which included Greek, Hindi, Urdu, Chinese and Yiddish among others. The robot was able to reproduce them all with reasonable stroke accuracy.

"I feel like there's something really beautiful about the robot writing in so many different languages," Tellex said. "I thought that was really cool."

But the system's masterwork may be its copy of Kotani's Mona Lisa sketch. Kotani drew his sketch on a dry-erase board in Tellex's lab, then let the robot copy it -- fairly faithfully -- on the same board just below the original.

"It was early morning that our robot finally drew the Mona Lisa on the whiteboard," Kotani said. "When I came back to the lab, everybody was standing around the whiteboard looking at the Mona Lisa and asking me if [the robot] drew this. They couldn't believe it."

It was a big moment for Kotani because "it was the moment that our robot defined what's beyond mere printing." An inkjet printer can recreate an image, but it does so with a print head that goes back and forth, building the image line by line. The robot, by contrast, created the image with human-like strokes, which to Kotani is "something much more humane and expressive."

Key to making the system work, Kotani says, is that the algorithm uses two distinct models of the image it's trying to reproduce. Using a global model that considers the image as a whole, the algorithm identifies a likely starting point for making the first stroke. Once that stroke has begun, the algorithm zooms in, looking at the image pixel by pixel to determine where that stroke should go and how long it should be. When it reaches the end of the stroke, the algorithm again calls the global model to determine where the next stroke should start, then it's back to the zoomed-in model. This process is repeated until the image is complete.
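To make that alternation between the two models concrete, here is a minimal sketch of the global/local loop described above. It is not the authors' code: the real system uses trained deep networks for both models, while this sketch substitutes simple heuristics (pick the strongest untraced ink pixel as the next start, then greedily walk to neighboring ink pixels) so the example stays self-contained and runnable. The names global_model, local_model and infer_strokes are illustrative, not from the paper.

```python
import numpy as np


def global_model(image, visited):
    """Stand-in for the global network: propose a start point for the next
    stroke by choosing the strongest ink pixel not yet covered by a stroke."""
    remaining = np.where(visited, 0.0, image)
    if remaining.max() <= 0:
        return None  # every ink pixel has been traced; the drawing is complete
    return np.unravel_index(np.argmax(remaining), image.shape)


def local_model(image, visited, start):
    """Stand-in for the zoomed-in network: extend the current stroke one pixel
    at a time toward adjacent ink until no untraced ink neighbors remain."""
    stroke = [start]
    y, x = start
    visited[y, x] = True
    while True:
        neighbors = [(y + dy, x + dx)
                     for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                     if (dy, dx) != (0, 0)
                     and 0 <= y + dy < image.shape[0]
                     and 0 <= x + dx < image.shape[1]]
        candidates = [(ny, nx) for ny, nx in neighbors
                      if image[ny, nx] > 0 and not visited[ny, nx]]
        if not candidates:
            return stroke  # end of this stroke
        # follow the darkest adjacent untraced ink pixel
        y, x = max(candidates, key=lambda p: image[p])
        visited[y, x] = True
        stroke.append((y, x))


def infer_strokes(image):
    """Alternate between the global and local models until the image is covered."""
    visited = np.zeros(image.shape, dtype=bool)
    strokes = []
    while True:
        start = global_model(image, visited)
        if start is None:
            return strokes
        strokes.append(local_model(image, visited, start))


# Tiny example: a 5x5 image containing an "L"-shaped mark.
img = np.zeros((5, 5))
img[0:4, 1] = 1.0   # vertical bar
img[3, 1:4] = 1.0   # horizontal bar
print(infer_strokes(img))
```

In the published system both decisions are learned from data rather than hard-coded, which is what lets the same loop recover plausible stroke orders for characters and sketches it has never seen.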

Both Kotani and Tellex say the work is a step toward better communication between people and robots. Ultimately, they envision robots that can leave Post-it Notes, take dictation or sketch diagrams for their human coworkers and collaborators.

"I want a robot to be able to do everything a person can do," Tellex said. "I'm particularly interested in a robot that can use language. Writing is a way that people use language, so we thought we should try this."
-end-
Video: https://youtu.be/xX_LX6qeiYY

Brown University


Everyone's seen a piece of science getting over-exaggerated in the media. Most people would be quick to blame journalists and big media for getting in wrong. In many cases, you'd be right. But there's other sources of hype in science journalism. and one of them can be found in the humble, and little-known press release. We're talking with Chris Chambers about doing science about science journalism, and where the hype creeps in. Related links: The association between exaggeration in health related science news and academic press releases: retrospective observational study Claims of causality in health news: a randomised trial This...