Along with the ever-growing computational power of mobile devices, mobile visual search has undergone an evolution in techniques and applications. A significant trend is low bit rate visual search, in which compact visual descriptors are extracted directly on the mobile device and delivered as queries in place of raw images, reducing query transmission latency. In this article, we introduce our work on low bit rate mobile landmark search, in which a compact yet discriminative landmark image descriptor is extracted by exploiting location context such as GPS, crowd-sourced WLAN hotspots, and cell tower positions. The compactness originates from a bag-of-words image representation, learned offline from geotagged photos on photo-sharing websites such as Flickr and Panoramio. The learning process segments the landmark photo collection into discrete geographical regions using a Gaussian mixture model and then boosts a ranking-sensitive vocabulary within each region, with "entropy"-based feedback on descriptor compactness to refine both phases iteratively. During online search, upon entering a geographical region, the codebook on the mobile device is adapted via a downstream update to generate extremely compact descriptors with promising discriminative ability. We have deployed landmark search apps on both HTC and iPhone handsets, accessing a million-scale image database covering typical areas such as Beijing, New York, and Barcelona. Our descriptor outperforms alternative compact descriptors (Chen et al. 2009; Chen et al. 2010; Chandrasekhar et al. 2009a; Chandrasekhar et al. 2009b) by significant margins. Beyond landmark search, this article also summarizes the MPEG standardization progress of compact descriptors for visual search (CDVS) (Yuri et al. 2010; Yuri et al. 2011) toward application interoperability.
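To make the offline pipeline concrete, the following is a minimal sketch (not the authors' released code) of its first phase: fitting a Gaussian mixture model over photo geotags to carve the landmark collection into discrete geographical regions. The region count, data layout, and library choice (scikit-learn) are illustrative assumptions.

```python
# Illustrative sketch: GMM-based geographical segmentation of geotagged
# landmark photos, the first phase of the offline learning described above.
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_regions(geotags, n_regions=50, seed=0):
    """Cluster (latitude, longitude) geotags into geographical regions.

    geotags: array of shape (n_photos, 2) holding (lat, lon) pairs.
    Returns the fitted mixture model and a region label per photo.
    """
    gmm = GaussianMixture(n_components=n_regions,
                          covariance_type="full",
                          random_state=seed)
    labels = gmm.fit_predict(np.asarray(geotags))
    return gmm, labels

# Toy usage: photos scattered around two landmark areas.
rng = np.random.default_rng(0)
tags = np.vstack([rng.normal([39.90, 116.40], 0.01, (100, 2)),  # near Beijing
                  rng.normal([41.40, 2.17], 0.01, (100, 2))])   # near Barcelona
gmm, labels = segment_regions(tags, n_regions=2)
```

A ranking-sensitive vocabulary would then be boosted separately within each discovered region, with the entropy-based compactness feedback deciding whether the segmentation itself should be refined.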
We propose to learn an extremely compact visual descriptor from mobile contexts for low bit rate mobile location search. Our scheme combines location-related side information from the mobile device to adaptively supervise the compact visual descriptor design in a flexible manner, making it well suited to searching for locations or landmarks over a bandwidth-constrained wireless link. Along with the proposed compact descriptor learning, we also introduce PKUBench, a large-scale, context-aware mobile visual search benchmark dataset, which serves as the first comprehensive benchmark for quantitatively evaluating how cheaply available mobile contexts can help mobile visual search systems. Our contextual-learning-based compact descriptor is shown to outperform existing work in both compression rate and retrieval effectiveness.
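As a rough illustration of how a region-adapted vocabulary yields a compact query, the sketch below hard-quantizes the local features of a query image against a small downloaded codebook and keeps only the set of activated word IDs. The sizes, names, and hard-assignment strategy are assumptions for exposition, not the paper's exact encoder.

```python
# Minimal sketch: turning a query image into a sparse bag-of-words
# descriptor against a small region-adapted vocabulary.
import numpy as np

def compact_descriptor(local_feats, vocabulary):
    """local_feats: (n, d) local features extracted from the query image.
    vocabulary: (k, d) region-adapted visual words (k kept small on purpose).
    Returns the sorted unique word IDs, i.e. the support of a binary
    bag-of-words histogram, which is what would be transmitted."""
    # Assign each feature to its nearest visual word (hard quantization).
    d2 = ((local_feats[:, None, :] - vocabulary[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.unique(words)

# Toy usage with random data standing in for real features.
rng = np.random.default_rng(1)
vocab = rng.normal(size=(256, 32))   # toy 256-word regional vocabulary
feats = rng.normal(size=(300, 32))   # toy query-image features
ids = compact_descriptor(feats, vocab)
print(f"{ids.size} word IDs, roughly {ids.size} bytes at 1 byte per ID")
```

Because the vocabulary is specialized to the current region, a few hundred word IDs (further shrinkable by entropy coding) can stand in for a raw image of hundreds of kilobytes.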
AlGaN/GaN metal-oxide-semiconductor high electron mobility transistors (MOSHEMTs) with a thick (>35 nm), high-κ (TiO2/NiO), submicrometer-footprint (0.4 µm) gate dielectric are found to exhibit two orders of magnitude lower gate leakage current (~1 nA/mm up to +3 V applied gate bias), higher I_max (709 mA/mm), and higher drain breakdown voltage than Schottky barrier (SB) HEMTs of the same geometry. The maximum extrinsic transconductance of both the MOSHEMTs and the SBHEMTs with 2 × 80 µm gate fingers is measured to be 149 mS/mm. The addition of the submicrometer-footprint gate oxide layer results in only a small reduction of the current gain cutoff frequency (21 versus 25 GHz, derived from S-parameter test data) because of the high permittivity (κ ≈ 100) of the gate dielectric. This high-performance submicrometer-footprint MOSHEMT is highly promising for microwave power amplifier applications in communication and radar systems.
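A standard first-order relation (textbook device physics, not an equation from the measurements above) clarifies why the thick high-κ oxide costs so little cutoff frequency:

\[
f_T \approx \frac{g_m}{2\pi C_G}, \qquad \frac{1}{C_G} = \frac{1}{C_{ox}} + \frac{1}{C_{barrier}}, \qquad C_{ox} = \frac{\kappa\,\varepsilon_0 A}{t_{ox}}.
\]

With κ ≈ 100, the oxide capacitance C_ox stays large even at an oxide thickness above 35 nm, so the series gate capacitance C_G remains dominated by the AlGaN barrier term and is only slightly reduced, which is consistent with the modest measured drop in f_T from 25 to 21 GHz.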