Research and implementation of a scalable, open-source, next-generation multimedia search engine able to search information stemming from the physical world.
The main goal of the SMART project is to research and implement a scalable, open-source, next-generation multimedia search engine able to search information stemming from the physical world. The SMART search engine will answer queries by intelligently collecting and combining sensor-generated multimedia data, selecting the sensors and sensor-processing algorithms that match the context at hand. Queries will be matched to sensors and sensor-processing algorithms (notably audio- and video-processing algorithms) based on each sensor's context and metadata (e.g., location, state, capabilities), as well as on the dynamic context of the physical world as the latter is perceived by multimedia-processing algorithms such as face detectors, person trackers, classifiers of acoustic events, and crowd-analysis components. Furthermore, SMART will leverage social-network information in order to facilitate social queries over a multitude of sensor data.
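The matching of queries to sensors via context and metadata can be sketched as a simple filter over sensor descriptors. The following is a minimal, hypothetical illustration; the class and function names are assumptions for this sketch, not the SMART project's actual API:

```python
# Hypothetical sketch of context-based query-to-sensor matching.
# Names (Sensor, match_sensors) are illustrative, not the SMART API.
from dataclasses import dataclass, field

@dataclass
class Sensor:
    sensor_id: str
    location: str                                    # sensor context: deployment site
    state: str                                       # e.g., "active" or "offline"
    capabilities: set = field(default_factory=set)   # e.g., {"video", "face_detection"}

def match_sensors(sensors, location, required_capabilities):
    """Return active sensors at `location` offering all required capabilities."""
    return [
        s for s in sensors
        if s.state == "active"
        and s.location == location
        and required_capabilities <= s.capabilities
    ]

sensors = [
    Sensor("cam-01", "plaza", "active", {"video", "face_detection"}),
    Sensor("mic-07", "plaza", "active", {"audio", "acoustic_event_classification"}),
    Sensor("cam-02", "station", "offline", {"video"}),
]

# A query such as "how crowded is the plaza?" would require video at the plaza.
hits = match_sensors(sensors, "plaza", {"video"})
print([s.sensor_id for s in hits])  # ['cam-01']
```

In a full system the location match would be geospatial and the dynamic context (e.g., crowd-analysis output) would also feed into the ranking; this sketch only shows the static metadata filter described above.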