BIG DATA
Computer Scientist Takes Aim at Improved Indexing of Digital Information
Inflation's got nothing to do with it. Since the beginning of time, a picture has always been worth more than a thousand words. But in this age of information proliferation, that reality is the taproot of a vexing problem that Zhongfei "Mark" Zhang, an assistant professor of computer science at Binghamton University, is determined to help solve.
From personal and commercial digital image libraries and multimedia databases to data mining programs and high-tech security and defense surveillance, our need for more efficient and more effective ways to index, retrieve, manipulate and understand complex video or images is pressing. Verbal cues--whether keywords or multiple-page abstracts--are just not cut out for the job, and neither coercion nor clichés can change that fact, Zhang said.
"It's very difficult to capture the entire content of a picture with any number of words," Zhang said. "And you certainly can't capture an image with a single word or with a few key words. In terms of effectiveness, this is not a good approach."
Take, for example, a picture in which a couple stands in front of their house. Behind their house is a large palm tree. Several other shrubs, trees and plants are in the front yard, which is enclosed by a fence. In the background, there are hills and clouds. Imagine what the people are wearing, what they are doing, that one is white and one is black, and the problem comes into sharper focus.
"Looking at such an image, what are the keywords?" Zhang asked. "Couple? Man? Woman? House? Palm? Fence? It's almost impossible to use words to describe the net content of the image, including its shapes, colors and textures. It takes the power of extensive computer analysis and processing to manage this kind of task.
"Still, so far, there is no commercial product available that can index large-scale imagery or non-textual databases in their own modality. As far as I know, all or almost all commercially available multimedia database programs work at the keyword level."
That's why Zhang is involved in a number of research projects that seek to understand and optimize the indexing, retrieval and use of images based on algorithms that rely on the semantics of the images themselves. His work is funded by industry and defense agencies with grants that are expected to reach more than $200,000 by year's end.
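The article does not spell out those algorithms, but the flavor of "relying on the semantics of the images themselves" can be suggested with a small sketch: instead of matching keywords, a content-based system compares images by signatures computed from their pixels, such as a global color histogram (one of the color, texture and shape cues Zhang mentions). The code below is an illustrative toy, not Zhang's system; the images are synthetic stand-ins.

```python
import numpy as np

def color_histogram(image, bins=8):
    """Quantize an H x W x 3 RGB array into a normalized color histogram.

    Each channel is split into `bins` levels, giving a bins**3-dimensional
    signature of the image's overall color content.
    """
    # Map 0..255 pixel values to per-channel bin indices.
    quantized = (image.astype(np.uint32) * bins) // 256
    # Combine the three channel indices into a single bin id per pixel.
    ids = (quantized[..., 0] * bins + quantized[..., 1]) * bins + quantized[..., 2]
    hist = np.bincount(ids.ravel(), minlength=bins ** 3).astype(float)
    return hist / hist.sum()

def histogram_intersection(h1, h2):
    """Similarity in [0, 1]; 1.0 means the color distributions match exactly."""
    return float(np.minimum(h1, h2).sum())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "photos" standing in for database images.
    img_a = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    img_b = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    print(histogram_intersection(color_histogram(img_a), color_histogram(img_b)))
```

A real retrieval engine would combine many such descriptors, but the point of the sketch is the contrast with keywords: the signature comes from the image itself.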
An expert in image understanding and multimedia indexing and retrieval, Zhang has worked in the recent past on image indexing and retrieval with Kodak Corp. and on multimedia indexing and retrieval of patient records with Upstate Medical Center. Those collaborations evaporated as funding for the projects dried up, but alongside his current funded projects, Zhang continues to pursue that research as well as another, currently unfunded, project: facial recognition. His progress on all fronts is impressive.
Earlier this month, Zhang and his student Ruofei Zhang filed an invention disclosure on their prototype system for improved content-based image retrieval. The system uses a novel fuzzy logic-based indexing scheme as well as a novel user relevance feedback algorithm. Based on semantic similarity within the images themselves, it can rapidly and effectively identify and retrieve images from very large databases or the Internet. The system, which Zhang has dubbed "FAST"--for Fast And Semantic-Tailored image retrieval--also "learns" from user feedback about the relevance of the images it retrieves. In other words, of the images it returns, you tell it which ones bore the closest resemblance to what you were looking for, and it improves its performance with each new search based on that feedback.
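FAST's actual indexing scheme and feedback algorithm are not published in the article; as a rough illustration of the feedback loop described, the sketch below ranks images by similarity to a query feature vector and then nudges the query toward results the user marks relevant. It uses a standard Rocchio-style update purely as a stand-in for Zhang's method, and all names and data are hypothetical.

```python
import numpy as np

def rank_by_similarity(query, features):
    """Return image indices ordered from most to least similar to `query`.

    `features` is an N x D array of precomputed image feature vectors
    (color, texture, shape descriptors, etc.); similarity is cosine.
    """
    q = query / (np.linalg.norm(query) + 1e-12)
    f = features / (np.linalg.norm(features, axis=1, keepdims=True) + 1e-12)
    return np.argsort(-(f @ q))

def refine_query(query, features, relevant, irrelevant,
                 alpha=1.0, beta=0.75, gamma=0.25):
    """Rocchio-style update: move the query toward images the user marked
    relevant and away from those marked irrelevant."""
    new_q = alpha * query
    if relevant:
        new_q += beta * features[relevant].mean(axis=0)
    if irrelevant:
        new_q -= gamma * features[irrelevant].mean(axis=0)
    return new_q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    features = rng.random((1000, 64))   # hypothetical feature database
    query = rng.random(64)              # features of an example query image
    first_pass = rank_by_similarity(query, features)[:10]
    # Suppose the user marks the first three results relevant, the rest not.
    query = refine_query(query, features,
                         list(first_pass[:3]), list(first_pass[3:]))
    second_pass = rank_by_similarity(query, features)[:10]
    print(second_pass)
```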
Zhang has already been approached about the prototype by the American Museum of Natural History in New York City, where databases of hundreds of thousands of images could become more accessible through better indexing and retrieval.
He is also currently working with funding from the US Air Force on a project to develop a system to automatically recognize independent motion directly in the compressed surveillance video, particularly video shot from unmanned surveillance aircraft such as the Predator. (See related story.)
When video is shot from a moving plane, extensive analysis is needed to detect which, if any, elements in a given frame or set of frames are moving independently. Currently, that analysis requires decompression of the compressed video, followed by tedious inspection of large archived image databases by human image analysts.
Zhang is developing a technology that will automatically detect independent motion in compressed video streams, whether from an archived database or directly from a remote sensor hookup in real time. It promises to improve the efficiency of the current process by orders of magnitude.
"If you have a still camera and you want to detect motion, all you have to do is detect the difference between two individual frames," Zhang said. "However, in many scenarios, especially in military surveillance, typically the camera is also in motion, so everything is in motion from frame to frame. We have developed a preliminary prototype system to robustly and automatically detect independent motion directly from the compressed video domain."
But Zhang's most challenging project to date might be his new work on automatic model generation in an area called information fusion. Preliminary work on this project was funded jointly by the U.S. Air Force and the National Institute of Justice. The project aims at automatic detection of money laundering schemes.
"This is a completely new research problem," Zhang said. "I used to work on computer vision and image understanding focusing on imagery and video data. Now my research horizon is extending to incorporate the area of data mining in general, and in this project we are focusing on the text data modality in particular."
To investigate money laundering crimes, Zhang's research team has access to a significant amount of textual data, ranging from court reports, financial transaction records and bank statements to personal communications and news reports.
Zhang and his students are developing robust data mining techniques to automatically build up money laundering crime models from scanning such large collections of textual documents. A generated model indicates those involved in a specific money laundering crime, and helps detail the relationships between the individuals involved in the crime (e.g., who is in charge of the group), as well as all the activities the individuals have engaged in as part of the crime.
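The article describes the goal rather than the method, but such a crime model can be pictured as a graph linking people, their relationships and their activities as extracted from documents. The toy sketch below builds that kind of graph from sentence-level co-occurrence, with made-up names and a hand-rolled extractor standing in for whatever mining techniques Zhang's group actually uses.

```python
import re
from collections import defaultdict
from itertools import combinations

# Hypothetical snippets standing in for court reports, bank statements, etc.
DOCUMENTS = [
    "Alice Doe instructed Bob Roe to wire funds through Shell Corp.",
    "Bob Roe opened three accounts and deposited cash for Alice Doe.",
    "Carol Poe delivered statements from Shell Corp to Alice Doe.",
]

KNOWN_ENTITIES = ["Alice Doe", "Bob Roe", "Carol Poe", "Shell Corp"]
ACTIVITY_VERBS = ["instructed", "wire", "opened", "deposited", "delivered"]

def build_crime_model(documents):
    """Link entities that appear in the same sentence and record the
    activity words seen alongside each entity."""
    links = defaultdict(int)       # (entity, entity) -> co-occurrence count
    activities = defaultdict(set)  # entity -> activity words
    for doc in documents:
        for sentence in re.split(r"[.!?]", doc):
            present = [e for e in KNOWN_ENTITIES if e in sentence]
            verbs = [v for v in ACTIVITY_VERBS if v in sentence.lower()]
            for a, b in combinations(sorted(present), 2):
                links[(a, b)] += 1
            for entity in present:
                activities[entity].update(verbs)
    return links, activities

if __name__ == "__main__":
    links, activities = build_crime_model(DOCUMENTS)
    # The most-connected entity is a crude stand-in for "who is in charge".
    for pair, count in sorted(links.items(), key=lambda kv: -kv[1]):
        print(pair, count)
    for entity, acts in activities.items():
        print(entity, sorted(acts))
```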
"Current investigation techniques require at least several months' effort to build up the model because the model is generated manually," Zhang said. The prototype system Zhang's group has developed only takes a few minutes to generate a money laundering crime model and so holds great promise in future money laundering investigation, prosecution, and prevention.
"The government is extremely interested in automating, or at least semi-automating this investigation process to significantly save the man power in law enforcement agencies and to significantly expedite the crime investigation and prosecution time," Zhang said.
"Considering the threat of global terrorism, preventing money laundering becomes ever more important to stop the financing of terrorist activities," Zhang said, "and I can tell you, though, that this research has great potential."