Novel Motion Anchoring Strategies for Wavelet-based Highly Scalable Video Compression

Nonfiction, Computers, Application Software, Computer Graphics, Science & Nature, Technology, Electronics
Author: Dominic Rüfenacht
ISBN: 9789811082252
Publisher: Springer Singapore
Publication: April 3, 2018
Imprint: Springer
Language: English

A key element of any modern video codec is the efficient exploitation of temporal redundancy via motion-compensated prediction. This book describes a novel paradigm for representing and employing motion information in a video compression system, one with several advantages over existing approaches. Traditionally, motion is estimated, modelled, and coded as a vector field anchored at the target frame it predicts. While this “prediction-centric” approach is convenient, attaching the motion to a specific target frame means it cannot easily be re-purposed to predict or synthesize other frames, which severely hampers temporal scalability.
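As a concrete illustration of the conventional target-anchored scheme, the sketch below performs simple block-based motion-compensated prediction: a “block” motion field anchored at the target frame points back into a reference frame. The function names, block size, and integer-pel motion are illustrative assumptions, not the book's actual codec.

```python
import numpy as np

def predict_target(reference, motion, block=8):
    """Block-based motion-compensated prediction.

    `motion[i, j]` holds the integer (dy, dx) displacement, anchored at
    the *target* frame, pointing back into `reference` for block (i, j).
    """
    h, w = reference.shape
    pred = np.zeros_like(reference)
    for i in range(0, h, block):
        for j in range(0, w, block):
            dy, dx = motion[i // block, j // block]
            # Clamp the source block so it stays inside the reference frame.
            y = int(np.clip(i + dy, 0, h - block))
            x = int(np.clip(j + dx, 0, w - block))
            pred[i:i+block, j:j+block] = reference[y:y+block, x:x+block]
    return pred
```

Because the field is anchored at the target frame, it can only ever be used to predict that one frame; predicting a different frame requires coding another field.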

In light of this, the present book explores the possibility of anchoring motion at reference frames instead. Key to the success of the proposed “reference-based” anchoring schemes is high-quality motion inference, enabled by a more “physical” motion representation than the traditionally employed “block” motion fields. The resulting compression system supports computationally efficient, high-quality temporal motion inference while requiring only half as many coded motion fields as conventional codecs. Furthermore, features beyond compressibility, including high scalability, accessibility, and “intrinsic” framerate upsampling, can be seamlessly supported. These features are becoming ever more relevant as the way video is consumed continues shifting from the traditional broadcast scenario to interactive browsing of video content over heterogeneous networks.
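To illustrate the reference-anchored idea, here is a minimal sketch, with hypothetical names and a crude nearest-neighbour splat rather than the book's more sophisticated representation. A dense motion field anchored at reference frame f0 (mapping f0 to f2) is transferred to an intermediate frame f1 by scaling each vector under a constant-velocity assumption, so one coded field can serve multiple target frames:

```python
import numpy as np

def transfer_to_target(motion_ref, t=0.5):
    """Transfer a reference-anchored motion field to a target frame.

    `motion_ref[y, x]` is the (dy, dx) displacement of reference pixel
    (y, x) over the full interval f0 -> f2. Each vector is scaled by t
    and "splatted" to the position it reaches in the intermediate frame
    f1; the result maps f1 back to f0, ready for prediction.
    """
    h, w = motion_ref.shape[:2]
    motion_tgt = np.zeros((h, w, 2), dtype=float)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            dy, dx = motion_ref[y, x]
            ty = int(round(y + t * dy))
            tx = int(round(x + t * dx))
            if 0 <= ty < h and 0 <= tx < w:
                motion_tgt[ty, tx] = (-t * dy, -t * dx)
                filled[ty, tx] = True
    return motion_tgt, filled  # holes in `filled` mark disocclusions
```

The unfilled positions expose disoccluded regions, one of the issues that a practical reference-anchored scheme, like the one developed in this book, has to resolve explicitly.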

This book is of interest to researchers and professionals working in multimedia signal processing, in particular those who are interested in next-generation video compression. Two comprehensive background chapters on scalable video compression and temporal frame interpolation make the book accessible for students and newcomers to the field.



More books from Springer Singapore

Innovation and IPRs in China and India
Phenomenological Structure for the Large Deviation Principle in Time-Series Statistics
Acute Ischemic Stroke
Excitonic and Photonic Processes in Materials
Success in Higher Education
Algebraic Properties of Generalized Inverses
Everyday Youth Literacies
International Symposium for Intelligent Transportation and Smart City (ITASC) 2017 Proceedings
Early Study-Abroad and Identities
Reconceptualizing Confucian Philosophy in the 21st Century
A Corpus-Based Approach to Clause Combining in English from the Systemic Functional Perspective
Amaranthus: A Promising Crop of Future
Ecophysiology, Abiotic Stress Responses and Utilization of Halophytes
Biological Invasions and Its Management in China
The Development of BRIC and the Large Country Advantage