Salman UI Haq | Cloudtweaks

Mobile cloud computing has been gaining momentum in the market, driven by its growing benefits in data manageability and user experience.

Probably the most alluring aspect of mobile cloud computing is augmented reality (AR), a technology that overlays computer-generated content onto human sensory perception to blend with the real-time environment. Deploying remote resources for mobile cloud architectures has become easier and more efficient with the Cloud-Mobile Convergence (CMC) paradigm, which advocates adapting remote resources into a shared resource pool for mobile cloud models.

CMC models have given rise to state-of-the-art AR hardware with optimized cloud computing capabilities, such as Google Glass. Google Glass draws on extensive AR implementations to deliver an enhanced user experience, and its "convergence of resources" paradigm capitalizes on efficient, timely use of resources no matter where they are located. Cloud computing experts are striving to provide the most intuitive and automated methods of human-computer interaction: scientific visualization, satellite/aerial imagery viewing, and teleconferencing are some of the hallmarks of cloud-based vision interaction systems that Google Glass combines to some extent.

With billions of terabytes of data being updated in real time, Google Glass lets users share data and gesture-based activities through the cloud. As worldwide demand for data storage grows by the minute, the Glass project is likely to face serious cloud storage and hosting challenges, and third-party cloud hosting providers can seize that market space to offer hosting services that improve the data-sharing experience on Google Glass. The Glass team has hinted at deploying third-party cloud servers to offload some of the compute-intensive calculations; Amazon Elastic Compute Cloud (EC2) is probably the strongest candidate for such a resource-sharing pact, given previous collaborations.
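The offloading decision described above usually comes down to a latency trade-off: a task is worth sending to a remote server only if the network round trip plus remote compute time beats doing the work on the device. Here is a minimal sketch of that cost model; the function name, throughput figures, and link-speed defaults are all illustrative assumptions, not numbers from the Glass project.

```python
# Hypothetical cost model: offload a task to a cloud server only when the
# estimated round trip (RTT + upload + remote compute) beats local compute.
# All default parameters below are illustrative assumptions.
def should_offload(payload_bytes, task_ops, *,
                   uplink_bps=2_000_000,    # assumed mobile uplink: 2 Mbit/s
                   local_ops_per_s=50e6,    # assumed on-device throughput
                   remote_ops_per_s=5e9,    # assumed cloud throughput
                   rtt_s=0.08):             # assumed network round-trip time
    local_s = task_ops / local_ops_per_s
    remote_s = (rtt_s
                + payload_bytes * 8 / uplink_bps
                + task_ops / remote_ops_per_s)
    return remote_s < local_s

# A small, cheap task is faster on-device; a heavy AR workload favors the cloud.
print(should_offload(20_000, 1e6))    # → False (local wins)
print(should_offload(200_000, 5e9))   # → True (cloud wins)
```

Under these assumptions the light task finishes locally in about 20 ms, while shipping it to the cloud would cost roughly 160 ms in transfer alone; the heavy task inverts that comparison.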

Cloud computing promises to resolve multifaceted performance challenges, such as cluster-driven multi-million-pixel processing, in advanced and complex systems like Google Glass. Offloading motion-based sensory input, as Google Glass does, requires highly efficient algorithms on the cloud servers to meet end-user requirements. The cloud's parallel computing potential helps developers implement sensory processing at the back end, using the node-analysis and comparison techniques offered by AR libraries. Thanks to high-performance cloud infrastructure, Google Glass offers a responsive real-time interaction experience in which gesture sensing and eye motion are processed within milliseconds using cloud resources. The future holds vast scope for AR-based cloud mobile devices, and the Google Glass project is considered a stepping stone for cloud integration with wearable hardware and mobile platforms.
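The parallel back-end processing described above can be sketched with a simple worker pool: incoming sensor frames are fanned out across workers so per-frame analysis runs concurrently, keeping per-frame latency low. The frame format and the `analyze_frame` scoring step are hypothetical stand-ins for real AR node analysis, not anything from the Glass back end.

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_frame(frame):
    # Stand-in for real AR node analysis: reduce a frame of samples
    # to a single score (here, just the mean of its values).
    return sum(frame) / len(frame)

def process_batch(frames, workers=4):
    # Fan frames out across a worker pool, the way a cloud back end
    # might parallelize per-frame gesture and eye-motion analysis.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(analyze_frame, frames))

frames = [[i, i + 1, i + 2] for i in range(8)]
print(process_batch(frames))  # → [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
```

`pool.map` preserves input order, so results line up with the original frame sequence even though frames are analyzed concurrently.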

