This document specifies:
- physical and material parameters of virtual or real objects, expressed to support comprehensive haptic rendering methods, such as stiffness, friction and micro-textures; and
- a flexible specification of the haptic rendering algorithm itself.
It supplements other standards that describe scene or content description and information models for virtual and mixed reality, such as ISO/IEC 19775 and ISO/IEC 3721-1.
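Purely as an illustration of the kind of physical and material parameters involved, the sketch below groups them into a simple record; the field names and units are assumptions for illustration, not the standard's normative parameter model.

```python
# Illustrative only: field names and units are assumptions, not the
# normative parameter model defined by the standard.
from dataclasses import dataclass

@dataclass
class HapticMaterial:
    stiffness: float          # resistance to penetration, e.g. in N/m
    static_friction: float    # dimensionless friction coefficient at rest
    dynamic_friction: float   # dimensionless friction coefficient in motion
    texture_period_mm: float  # spatial period of the micro-texture pattern

# A hypothetical rubber-like surface, as a renderer might parameterize it.
rubber = HapticMaterial(stiffness=800.0, static_friction=0.9,
                        dynamic_friction=0.7, texture_period_mm=0.5)
print(rubber.stiffness)
```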
This document specifies the syntax, semantics and decoding for visual volumetric media using video‑based coding methods. This document also specifies processes that can be needed for reconstruction of visual volumetric media, which can also include additional processes such as post‑decoding, pre-reconstruction, post‑reconstruction and adaptation.
The present document collects information on eXtended Reality (XR) in the context of 5G radio and network services. The primary scope of the present document is the documentation of the following aspects:
- introducing Extended Reality by providing definitions, core technology enablers, a summary of devices and form factors, as well as ongoing related work in 3GPP and elsewhere;
- collecting and documenting core use cases in the context of Extended Reality;
- identifying relevant client and network architectures, APIs and media processing functions that support XR use cases;
- analysing and identifying the media formats (including audio and video), metadata, accessibility features, interfaces and delivery procedures between client and network required to offer such an experience;
- collecting key performance indicators and Quality-of-Experience metrics for relevant XR services and the applied technology components; and
- drawing conclusions on the potential needs for standardisation in 3GPP.
ISO 32000-1:2008 specifies a digital form for representing electronic documents to enable users to exchange and view electronic documents independent of the environment in which they were created or the environment in which they are viewed or printed. It is intended for the developer of software that creates PDF files (conforming writers), software that reads existing PDF files and interprets their contents for display and interaction (conforming readers) and PDF products that read and/or write PDF files for a variety of other purposes (conforming products).
ISO/IEC 14772, the Virtual Reality Modeling Language (VRML), defines a file format that integrates 3D graphics and multimedia. Conceptually, each VRML file is a 3D time-based space that contains graphic and aural objects that can be dynamically modified through a variety of mechanisms. This part of ISO/IEC 14772 defines a primary set of objects and mechanisms that encourage composition, encapsulation, and extension. The semantics of VRML describe an abstract functional behaviour of time-based, interactive 3D, multimedia information. ISO/IEC 14772 does not define physical devices or any other implementation-dependent concepts (e.g., screen resolution and input devices). ISO/IEC 14772 is intended for a wide variety of devices and applications, and provides wide latitude in interpretation and implementation of the functionality. For example, ISO/IEC 14772 does not assume the existence of a mouse or 2D display device. Each VRML file:
a. implicitly establishes a world coordinate space for all objects defined in the file, as well as all objects included by the file;
b. explicitly defines and composes a set of 3D and multimedia objects;
c. can specify hyperlinks to other files and applications; and
d. can define object behaviours.
An important characteristic of VRML files is the ability to compose files together through inclusion and to relate files together through hyperlinking. For example, consider the file earth.wrl which specifies a world that contains a sphere representing the earth. This file may also contain references to a variety of other VRML files representing cities on the earth (e.g., the file paris.wrl). The enclosing file, earth.wrl, defines the coordinate system that all the cities reside in. Each city file defines the world coordinate system that the city resides in, but that becomes a local coordinate system when contained by the earth file. Hierarchical file inclusion enables the creation of arbitrarily large, dynamic worlds.
Therefore, VRML ensures that each file is completely described by the objects contained within it. Another essential characteristic of VRML is that it is intended to be used in a distributed environment such as the World Wide Web. There are various objects and mechanisms built into the language that support multiple distributed files, including:
a. in-lining of other VRML files;
b. hyperlinking to other files;
c. using established Internet and ISO standards for other file formats; and
d. defining a compact syntax.
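The earth.wrl/paris.wrl composition described above can be sketched as a minimal VRML97 file; the geometry and colour values are arbitrary illustrations:

```vrml
#VRML V2.0 utf8
# earth.wrl -- establishes the world coordinate space for everything it contains
Transform {
  children [
    Shape {
      appearance Appearance {
        material Material { diffuseColor 0.0 0.4 0.8 }
      }
      geometry Sphere { radius 1.0 }   # the earth
    }
    # paris.wrl keeps its own local coordinate system,
    # composed into this file's space via in-lining
    Inline { url "paris.wrl" }
  ]
}
```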
The objective of this document is to propose an extension to the existing standard for the information model representing the mixed and augmented reality (MAR) scene/contents description, namely:
1) extending the existing and conventional constructs for representing the virtual reality scene graph and structure such that a comprehensive range of mixed and augmented reality contents can also be represented;
2) as part of the extension, representing physical objects in the mixed and augmented reality scene targeted for augmentation;
3) as part of the extension, representing physical objects as augmentations to other (virtual or physical) objects in the mixed and augmented reality scene;
4) providing ways to spatially associate the aforementioned physical objects with the corresponding target objects (virtual or physical) in the mixed and augmented reality scene;
5) other necessary functionalities and abstractions that will support the dynamic MAR scene description, such as event/data mapping and dynamic augmentation behaviours;
6) describing the association between these constructs and the MAR system, which is responsible for taking and interpreting this information model and rendering/presenting it through the MAR display device.
The document also provides definitions for terms related to these MAR content informational components and their attributes. The target audience of this document is mainly MAR system developers and content designers interested in specifying MAR contents to be played by an MAR system or browser. The standard will provide a basis for further application standards or file formats for any virtual and mixed reality applications and content representation. The extension will be self-contained in the sense that it is independent from the existing virtual reality information constructs, focusing only on the mixed and augmented reality aspects.
However, this document proposes only the information model; it neither promotes nor proposes the use of any specific language, file format, algorithm, device, implementation method, or standard. The proposed model is to be considered the minimal basic model, which can be extended for other purposes in actual implementations.
The present document presents and classifies industrial use cases for AR applications and services. It forms the basis for the requirements document to be drafted, ETSI GS ARF 004: Augmented Reality Framework (ARF) Interoperability Requirements for AR components, systems and services.
The technologies specified in this document are:
- description languages and vocabularies to characterize devices and users; and
- control information to fine-tune the sensed information and the actuator commands for the control of virtual/real worlds, i.e. the user's actuation preference information, user's sensor preference information, actuator capability description, and sensor capability description.
The adaptation engine is not within the scope of this document. This document specifies the syntax and semantics of the tools required to provide interoperability in controlling devices (actuators and sensors) in real as well as virtual worlds:
- Control Information Description Language (CIDL), an XML schema-based language which enables one to describe the basic structure of control information;
- Device Capability Description Vocabulary (DCDV), an XML representation for describing capabilities of actuators such as lamps, fans, vibrators, motion chairs, scent generators, etc.;
- Sensor Capability Description Vocabulary (SCDV), interfaces for describing capabilities of sensors such as a light sensor, a temperature sensor, a velocity sensor, a global position sensor, an intelligent camera sensor, etc.;
- Sensory Effect Preference Vocabulary (SEPV), interfaces for describing the preferences of an individual user on specific sensorial effects such as light, wind, scent, vibration, etc.; and
- Sensor Adaptation Preference Vocabulary (SAPV), interfaces for describing the preferences of an individual user on each type of sensed information.
The technologies specified in this document are description languages and vocabularies which describe sensorial effects. The adaptation engine is not within the scope of this document (or the ISO/IEC 23005 series). This document specifies the syntax and semantics of the tools describing sensory information to enrich audio-visual contents:
- Sensory Effect Description Language (SEDL), an XML schema-based language which enables one to describe a basic structure of sensory information; and
- Sensory Effect Vocabulary (SEV), an XML representation for describing sensorial effects, such as light, wind, fog and vibration, that trigger human senses.
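As a purely hypothetical sketch of what an SEDL description carrying one SEV-style light effect might look like: the element names, attributes and example namespaces below are illustrative assumptions, not the normative schema.

```xml
<!-- Hypothetical sketch: element names, attributes and namespaces are
     illustrative assumptions, not the normative SEDL/SEV schema. -->
<SEM xmlns="urn:example:sedl"
     xmlns:sev="urn:example:sev"
     xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <!-- A light effect enriching the accompanying audio-visual content -->
  <Effect xsi:type="sev:LightType" color="#FF9900" intensity="0.7"
          activate="true" duration="PT2S"/>
</SEM>
```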
This document specifies a video coding technology known as essential video coding (EVC), comprising a syntax format, semantics and an associated decoding process. The decoding process is designed to guarantee that all EVC decoders conform to a specified combination of capabilities known as the profile, level and toolset. Any decoding process that produces cropped decoded output pictures identical to those produced by the described process is considered to be in conformance with the requirements of this document. This document is designed to cover a wide range of applications, including but not limited to digital storage media, television broadcasting and real-time communications.
This document provides definitions of data types and tools which are used in other parts of the ISO/IEC 23005 series but are not specific to a single part. It specifies the syntax and semantics of the data types and tools common to the tools defined in the other parts of the ISO/IEC 23005 series, such as: basic data types, which are used as building blocks in more than one of the tools in the series; colour-related basic types, which are used in light- and colour-related tools to help specify colour-related characteristics of devices or commands; and time stamp types, which can be used in device commands and sensed information to specify timing-related information. Classification schemes, which provide the semantics of words or terms and a normative way of referencing them, are also defined in Annex A if they are used in more than one part of the ISO/IEC 23005 series. Except for the profile and level definitions, the tools defined in this document are not intended to be used alone, but as parts or supporting tools of the tools defined in other parts of the ISO/IEC 23005 series. This document also contains standard profiles and levels to be used in specific application domains; the profile and level definitions collect tools from ISO/IEC 23005-2 and ISO/IEC 23005-5 with the necessary constraints.
This standard defines the data fields, types, and formats related to digital assets in order to improve the efficiency of digital asset identification. It also provides guidance for blockchain-based digital asset identification through the definition and description of methods and data structures.