Ben Hixon

College: Hunter College
Awards: National Science Foundation Graduate Research Fellowship, 2013

Just the Facts, Ma'am

Suppose you’re walking down the street, experience a pang of hunger and decide that only vegetarian sushi will satisfy you. Your smartphone pulls up 1,000 restaurant reviews. How can you find the nearest place with good service, affordable prices and, above all, great vegetarian sushi?

“Open IE, or open information extraction,” says Ben Hixon (Hunter College, B.A. in computer science, 2012). Now in a doctoral program at the University of Washington, Hixon won a $126,000 National Science Foundation Graduate Research Fellowship in 2013, the premier graduate research award in the science, technology, engineering and math (STEM) fields.

“In a normal search, you’re looking for keywords, but in Open IE, you’re using facts,” he explains. Open IE automatically pulls facts from news stories, blogs and other text on the Internet and catalogs them in a database. Hixon is figuring out how to search that database.

For example, “if you have the sentence, ‘President Obama is in the White House,’” he says, “you can extract that Obama is the current president.”
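To make fact-based search concrete, here is a minimal sketch that assumes facts are stored as (subject, relation, object) triples; the tiny in-memory fact list and the query helper are illustrative inventions, not Hixon’s actual system.

```python
# Minimal sketch: Open IE systems commonly represent facts as
# (subject, relation, object) triples. The facts and the query helper
# below are illustrative only, not Hixon's actual system.

from typing import List, Tuple

Triple = Tuple[str, str, str]  # (subject, relation, object)

# Facts an extractor might pull from sentences like
# "President Obama is in the White House."
facts: List[Triple] = [
    ("Barack Obama", "is", "the current president"),
    ("Barack Obama", "is in", "the White House"),
    ("Sushi Den", "serves", "vegetarian sushi"),
]

def query(relation: str, object_keyword: str) -> List[str]:
    """Return subjects whose facts match a relation and an object keyword."""
    return [subj for (subj, rel, obj) in facts
            if rel == relation and object_keyword in obj]

print(query("serves", "vegetarian sushi"))  # ['Sushi Den']
```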

Hixon’s interest dates to an undergraduate database class with Hunter professor Susan Epstein, whose interests range from machine learning to human-machine dialogue. “She passed around a sign-up sheet for people interested in research,” he says. In collaboration with Rebecca Passonneau, director of Columbia University’s Center for Computational Learning Systems, Epstein was building a dialogue system for blind people to query the Andrew Heiskell Braille and Talking Book Library in Manhattan.

Over several semesters, Hixon attacked aspects of the library project. To help the library’s voice recognition system better understand authors’ names and book titles, he used machine learning to quantify similarities between phonemes, the smallest units of sound that distinguish meaning in spoken language, to better associate spoken words with written ones.

“We could say, ‘Lunch sounds closer to launch than it does to bunch,’” he says. “The phonemes corresponding to the ‘uh’ and ‘aw’ sounds are more similar than the phonemes corresponding to the ‘l’ and ‘b’ sounds. If the computer thinks I said ‘naked launch’ but it knows that ‘launch’ and ‘lunch’ are very similar, then it’s more likely to make the appropriate correction and give me the book ‘Naked Lunch.’”
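A short sketch shows how phoneme similarity can drive that kind of correction: substituting confusable phonemes is made cheap in an edit-distance comparison, so “launch” lands closer to “lunch” than “bunch” does. The similarity scores here are hand-picked for illustration; Hixon’s work learned such values with machine learning rather than hard-coding them.

```python
# Sketch of the idea only: the similarity scores are invented for
# illustration; the actual work learned them from data.

# Pairwise phoneme similarity in [0, 1]; higher means more confusable.
phoneme_similarity = {
    frozenset(["AH", "AO"]): 0.8,  # 'uh' vs. 'aw' -- close vowels
    frozenset(["L", "B"]): 0.1,    # 'l' vs. 'b'  -- dissimilar consonants
}

def substitution_cost(p1: str, p2: str) -> float:
    """Substituting phonemes that sound alike costs less."""
    if p1 == p2:
        return 0.0
    return 1.0 - phoneme_similarity.get(frozenset([p1, p2]), 0.0)

def weighted_distance(a: list, b: list) -> float:
    """Edit distance over phoneme sequences with similarity-weighted substitutions."""
    m, n = len(a), len(b)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = float(i)
    for j in range(n + 1):
        d[0][j] = float(j)
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            d[i][j] = min(d[i - 1][j] + 1.0,
                          d[i][j - 1] + 1.0,
                          d[i - 1][j - 1] + substitution_cost(a[i - 1], b[j - 1]))
    return d[m][n]

lunch = ["L", "AH", "N", "CH"]
launch = ["L", "AO", "N", "CH"]
bunch = ["B", "AH", "N", "CH"]

# "launch" is closer to "lunch" than "bunch" is, so a recognizer that heard
# "naked launch" can still recover the title "Naked Lunch".
print(weighted_distance(lunch, launch))  # 0.2
print(weighted_distance(lunch, bunch))   # 0.9
```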

He made these findings publicly available after delivering a paper, co-authored with another student, in Italy.

In a follow-up project, he used phoneme similarities in a novel algorithm for voice search, which matches a spoken query to an item in a large database, and evaluated the algorithm on a gender-balanced set of spoken book titles.

Working with Passonneau during a 2011 Research Experiences for Undergraduates program, Hixon devised a way to measure the semantic specificity of a request made in a human-machine dialogue. And Hixon worked with graduate student Eric Osisek on a dialogue-based system that can recommend books to Heiskell Library users. Essentially, it treats books as nodes on a graph and finds clusters of books similar to those that the patron has previously requested.
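As a rough illustration of the graph idea, the sketch below links books judged similar and recommends anything within a few hops of a patron’s earlier requests; the toy graph and the breadth-first walk stand in for the clustering the actual system used.

```python
# Illustrative sketch only: the toy similarity graph and the breadth-first
# neighborhood walk stand in for the clustering used in the real system.

from collections import deque

# Books as nodes; edges connect books judged similar to each other.
graph = {
    "Naked Lunch": {"On the Road", "Howl"},
    "On the Road": {"Naked Lunch", "The Dharma Bums"},
    "Howl": {"Naked Lunch"},
    "The Dharma Bums": {"On the Road"},
    "Pride and Prejudice": {"Emma"},
    "Emma": {"Pride and Prejudice"},
}

def recommend(requested, max_hops=2):
    """Recommend books within a few hops of the patron's previous requests."""
    seen = set(requested)
    frontier = deque((book, 0) for book in requested)
    recommendations = []
    while frontier:
        book, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for neighbor in graph.get(book, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                recommendations.append(neighbor)
                frontier.append((neighbor, hops + 1))
    return recommendations

# A patron who liked "Naked Lunch" gets other Beat-era titles,
# but not the Austen cluster on the far side of the graph.
print(recommend(["Naked Lunch"]))  # e.g. ['On the Road', 'Howl', 'The Dharma Bums']
```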

In the summer after graduation, he returned to Passonneau’s lab. He developed an open dialogue manager that automatically combs a database looking for the terms that would be most useful in managing a dialogue about that database. He was slated to present their paper on this research at the June 2013 conference of the North American Chapter of the Association for Computational Linguistics.
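The article doesn’t spell out how the dialogue manager picks its terms, so the sketch below is only one plausible reading: favor the attribute that best splits the remaining database entries, so that each question narrows the search as much as possible. The toy records and the counting heuristic are assumptions made purely for illustration.

```python
# Heavily hedged sketch: one way a dialogue manager can choose what to ask
# about next is to pick the attribute that best splits the remaining
# candidates. The toy records and heuristic below are assumptions, not the
# published method.

from collections import Counter

books = [
    {"genre": "beat", "format": "audio"},
    {"genre": "beat", "format": "braille"},
    {"genre": "romance", "format": "audio"},
    {"genre": "mystery", "format": "audio"},
]

def most_discriminating(candidates, attributes):
    """Prefer the attribute whose values divide the candidates most evenly."""
    def largest_bucket(attr):
        counts = Counter(record[attr] for record in candidates)
        return max(counts.values())  # a smaller largest bucket means a better split
    return min(attributes, key=largest_bucket)

# Asking about genre narrows this list faster than asking about format.
print(most_discriminating(books, ["genre", "format"]))  # 'genre'
```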

Meanwhile, his research with University of Washington professor Oren Etzioni, who pioneered open information extraction, has shifted from voice recognition to “conversational search.”

“The structured knowledge obtained via Open IE is conducive to conversational interaction, while unstructured keywords don’t lend themselves to conversation,” Hixon says. “How would it feel to have a conversation with another person using only keywords? Not too pleasant.

“The idea is you could go to Google or Bing and enter a long question. If Google doesn’t understand what you want, a chat box will pop up asking you questions. Since we are not there in terms of voice recognition, I’m focusing on text-based search in a Web browser, but eventually we would like to move to voice, because when you’re walking around and looking for that sushi restaurant, you don’t want to type or stare at your screen.”