As humans, we’re accustomed to thinking creativity and intuition are unique to our species, but at the 2018 Re-Work Deep Learning Summit in Boston, presenters described ways artificial intelligence increasingly resembles us in those and other attributes.

That doesn’t mean we should fear it, according to Tom Wilde, CEO of Indico, a company that uses AI for document analysis. “It’s more like a bionic arm as opposed to a truckload of robots being brought in,” he said.

“Deep Learning” in Art

At the forefront of efforts to create a more “human-like” artificial intelligence is “deep learning,” a branch of AI in which layered artificial neural networks, loosely inspired by the brain, learn by repeatedly adjusting themselves in response to feedback about their own errors.
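
To see that feedback loop in miniature, consider the toy sketch below (ours, not any presenter’s): a tiny two-layer network learns the XOR function by nudging its weights in whatever direction shrinks its prediction error. Every name and number in it is illustrative.

```python
# A minimal sketch of the feedback loop at the heart of deep learning:
# a tiny neural network adjusts its weights based on its prediction errors.
import numpy as np

rng = np.random.default_rng(0)

# Toy task: learn XOR, which no single linear layer can represent.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))  # input -> hidden weights
W2 = rng.normal(size=(8, 1))  # hidden -> output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # Forward pass: compute a prediction.
    h = sigmoid(X @ W1)
    pred = sigmoid(h @ W2)

    # Backward pass: push the error back through the network...
    grad_pred = (pred - y) * pred * (1 - pred)
    grad_W2 = h.T @ grad_pred
    grad_h = (grad_pred @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h

    # ...and nudge the weights to reduce it. This is the feedback loop.
    W1 -= 0.5 * grad_W1
    W2 -= 0.5 * grad_W2

print(pred.round(2))  # should approach [[0], [1], [1], [0]]
```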

It’s the technology behind what Michael Sollami, Salesforce’s Lead Data Scientist, calls “computational photography.” “We’re basically rivaling human artists’ creativity and surpassing it,” he said, showing several slides of stunning, nuanced, and innovative images. Does that mean artists everywhere will soon be out of work (even more than they already are)?

Not at all, Sollami insisted. We’ll just have to re-work our definitions of art to incorporate a new and higher baseline. “It’s like a prosthetic,” he offered.

Prosaic Programs

Miguel Angel Campo-Rembado, the head of Data Science and Analytics at Twentieth Century Fox, said the company hopes to offer similar assistance to screenwriters. His team is working on a form of “machine storytelling,” which he described as “scientifically dissecting the story into different parts … to help human storytellers craft the right story for the right people.”

It’ll be two or three years before those tools are ready for use in the writing process, he said, but in the meantime, they’re already helping guide decisions about which scripts get made into films.

It’s not likely AI will ever replace screenwriters altogether, Campo-Rembado said, because while computers are great at counting words, they can’t truly understand them.

They can do a pretty good job of faking it, though, according to several other presenters. By learning which emojis people pair with their text, for example, the MIT Media Lab’s “DeepMoji” can detect emotions and sarcasm in writing, even when the text itself contains no emojis, graduate student Bjarke Felbo said.
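
The trick, sometimes called distant supervision, is that emojis act as free emotion labels: the model is trained to predict which emoji accompanied a piece of text, and that skill transfers to reading emotion in emoji-free text. The sketch below is ours, not the Media Lab’s code; it swaps their large neural network, trained on over a billion tweets, for a deliberately tiny dataset and a simple bag-of-words classifier.

```python
# Distant supervision in miniature: emojis that co-occurred with text
# serve as free emotion labels for training a text classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Text stripped of its emoji, paired with the emoji it appeared with.
texts = [
    "i love this so much",
    "best day ever",
    "this is just great, my flight got cancelled",
    "oh wonderful, more meetings",
    "i can't stop crying",
    "miss you already",
]
emojis = ["❤️", "😂", "😒", "😒", "😢", "😢"]

model = make_pipeline(
    CountVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, emojis)

# The predicted emoji acts as a proxy for emotion, including sarcasm,
# where positive words ("just great") co-occur with a negative emoji.
print(model.predict(["oh great, it's raining again"]))  # likely 😒
```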

Elsewhere at MIT, CSAIL is working on a different way for computers to “understand” language. Research Scientist David Harwath described how his team is mapping speech waveforms to images. Eventually, he said, they hope the images will act as a “Rosetta Stone” allowing simultaneous translation, even among languages with no written form.
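
One common way to build such a mapping, and a simplified stand-in for CSAIL’s actual model, is to train two encoders, one for audio and one for images, that project both into a shared vector space, then reward the network when a spoken caption lands near the image it describes. The PyTorch sketch below is illustrative only; its architectures and sizes are our assumptions.

```python
# Toy joint embedding: a spoken caption and an image are encoded into the
# same vector space, and matching pairs are trained to outscore mismatches.
import torch
import torch.nn as nn

class AudioEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        # 1-D convolutions over the raw waveform.
        self.net = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, wav):  # wav: (batch, 1, samples)
        return self.net(wav)

class ImageEncoder(nn.Module):
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=7, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, dim),
        )

    def forward(self, img):  # img: (batch, 3, H, W)
        return self.net(img)

audio_enc, image_enc = AudioEncoder(), ImageEncoder()
wav = torch.randn(4, 1, 16000)  # four one-second dummy clips
img = torch.randn(4, 3, 64, 64)  # four dummy images

a, v = audio_enc(wav), image_enc(img)
scores = a @ v.T  # similarity of every clip-image pair

# Margin ranking loss: each true pair (the diagonal) should outscore
# the mismatched pairs in its row by at least a margin of 1.
margins = (1.0 + scores - scores.diag().unsqueeze(1)).clamp(min=0)
loss = margins.masked_fill(torch.eye(4, dtype=torch.bool), 0).mean()
loss.backward()
```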

When it comes to writing things down, AI still has a long way to go. As Neil Yager, Chief Scientist at Phrasee, put it, generating text is easy; generating text with meaning is hard. His company uses deep learning to generate clickable email subject lines. While the model saves time and helps brands deliver more effective marketing campaigns, he said, a human approves all generated language before it is sent, to keep the occasional awkward phrasing out of customer inboxes.
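
That workflow, in which a model drafts many candidates and a person gates what actually ships, is easy to see in miniature. The sketch below is purely illustrative: the generator is a crude stand-in for a trained model, since Phrasee’s systems are proprietary.

```python
# Human-in-the-loop generation: nothing a model drafts reaches an inbox
# until a person approves it.
import random

def generate_subject_lines(product, n=5):
    """Crude stand-in generator; imagine a trained language model here."""
    openers = ["Don't miss", "Last chance:", "Just in:", "Big news about"]
    return [f"{random.choice(openers)} {product}" for _ in range(n)]

def human_review(candidates):
    """Show every candidate to a person before it can be sent."""
    approved = []
    for line in candidates:
        if input(f"Approve {line!r}? [y/n] ").strip().lower() == "y":
            approved.append(line)
    return approved

if __name__ == "__main__":
    drafts = generate_subject_lines("our spring sale")
    print("Ready to send:", human_review(drafts))
```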

Bill Aronson, CEO of Artificial Intelligence Research Group, queries “The Matrix”, a blob of programmable matter sans software.

Reflective Robots

Speaking of awkward, try having a real conversation with Cortana, Siri or Alexa, the voice-activated virtual assistants from Microsoft, Apple, and Amazon, respectively. The difficulty of doing so is why Qualcomm is using deep learning to create an “on-device” virtual assistant whose responses are more immediate, personal, and conversational, according to the company’s Senior Director of Engineering, Chris Lott. He said the goal is to create an assistant that discerns its user’s intent.

The ability to comprehend intent is especially critical for self-driving cars, said Yibiao Zhao, CEO of iSee, who described how his company aims to make computers as sensitive as humans to social cues and physics. The result, he said, will be an autonomous vehicle that can “manage a risk like a human and be able to negotiate with other cars and humans in the environment.”

Brilliant Blobs

The summit’s most shocking presenter was probably Bill Aronson, CEO of AI Research Group. He introduced an approach to computing he said was even more advanced than deep learning.

“We’re taking a blob of matter, adding electrodes to it, and making it do computation,” he said. “There is no software; the actual state of the material is the program.”

He wouldn’t say what the blob was made from — only that it was “completely unhackable, unclonable because of the nature of the physics of the material.” It’s also strong enough to be sent into outer space, he said.

Why dispense with software?

“We have spent the last 40, 50 years trying to develop artificial intelligence using digital and silicon, but our brains are not digital, and they’re certainly not silicon. We think in analog; we don’t think in digital,” he explained. “We have to start creating analog devices if we want to start to replicate the way humans think.”

That doesn’t mean digital AI is dead, he added. Just as other speakers promised their digital tools would assist rather than replace analog humans, Aronson assured the audience his company’s analog device would augment digital tools.

When asked whether his company’s device could enable the creation of humanoid robots, Aronson told an audience of some of AI’s brightest, “I think the applications of this are going to be so much wider than any of us in this room could possibly conceive of.”

Featured image: CC0 by Comfreak.