Information technology - Coded representation of immersive media - Part 9: Geometry-based point cloud compression
This document specifies geometry-based point cloud compression.
This document specifies extensions to existing scene description formats in order to support MPEG media, in particular immersive media. MPEG media includes, but is not limited to, media encoded with MPEG codecs, media stored in MPEG containers, MPEG media and application formats, as well as media provided through MPEG delivery mechanisms. Extensions include scene description format syntax and semantics, as well as the processing model when these extensions are used by a Presentation Engine. This document also defines a Media Access Function (MAF) API for communication between the Presentation Engine and the Media Access Function for these extensions. While the extensions defined in this document can be applicable to other scene description formats, they are provided for ISO/IEC 12113.
This document specifies the method of motion capture animation using H-Anim humanoid models. Each humanoid model consists of an articulated character with specified joints and motion capture data. As specified in ISO/IEC 19774-1, each character consists of joints and segments in a hierarchical structure. This document includes the following:
(1) concepts of motion capture as related to humanoid animation;
(2) concepts of motion capture data definition;
(3) definition of motion parameters and motion-capture animation data for transferring or exchanging motion between different humanoid character models;
(4) mapping the structure of motion capture data to the structure of H-Anim objects;
(5) H-Anim motion capture animation using interpolators;
(6) H-Anim motion definition using H-Anim Motion objects; and
(7) a method for generating and specifying an H-Anim motion capture animation.
This document specifies a standard technique for exchanging humanoid animation using motion capture. It does not mandate using any specific run-time system to render the H-Anim characters or animations.
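Item (5) above refers to interpolator-driven playback of motion capture keyframes. As a minimal, non-normative sketch, the following mirrors the key/keyValue pattern of an X3D interpolator node, simplified to a scalar joint parameter (real H-Anim joint rotations are quaternion-valued and blended by spherical interpolation); the keyframe values are made up:

```python
from bisect import bisect_right

def interpolate(keys, key_values, t):
    """Linearly interpolate a joint parameter at time fraction t.

    keys: sorted key times in [0, 1]; key_values: matching values.
    This mirrors the key/keyValue fields of an X3D interpolator node,
    simplified to scalars (H-Anim rotations would use SLERP).
    """
    if t <= keys[0]:
        return key_values[0]
    if t >= keys[-1]:
        return key_values[-1]
    i = bisect_right(keys, t)
    t0, t1 = keys[i - 1], keys[i]
    f = (t - t0) / (t1 - t0)
    return key_values[i - 1] * (1 - f) + key_values[i] * f

# Hypothetical elbow-flexion track sampled from motion capture:
keys = [0.0, 0.5, 1.0]
angles = [0.0, 1.2, 0.4]                 # radians
print(interpolate(keys, angles, 0.25))   # halfway between 0.0 and 1.2 -> 0.6
```

A run-time system would evaluate such a track once per frame and route the result to the corresponding joint's rotation field.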
This document specifies a systematic system for representing humanoids in a network-enabled 3D graphics and multimedia environment. Conceptually, each humanoid is an articulated character that can be embedded in different representation systems and animated using the facilities provided by the representation system. This document specifies the abstract form and structure of humanoids. Further, this document specifies the semantics of humanoid animation as an abstract functional behaviour of time-based, interactive 3D, multimedia articulated characters. This document does not define physical shapes for such characters but does specify how such characters can be structured for animation. This document is intended for a wide variety of presentation systems and applications, providing wide latitude in interpretation and implementation of the functionality.
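The joint/segment hierarchy described above can be pictured as a tree of named joints, each carrying the segment attached below it. The sketch below is illustrative only: the joint and segment names follow the flavour of ISO/IEC 19774 ("humanoid_root", "l_shoulder", ...), but the fields shown are a simplification, not the normative object model:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Joint:
    """Illustrative node of an H-Anim-style articulated hierarchy."""
    name: str
    segment: Optional[str] = None                  # segment below this joint
    children: List["Joint"] = field(default_factory=list)

def joint_names(joint: Joint) -> List[str]:
    """Depth-first list of joint names in the hierarchy."""
    return [joint.name] + [n for c in joint.children for n in joint_names(c)]

root = Joint("humanoid_root", "sacrum", [
    Joint("l_shoulder", "l_upperarm", [Joint("l_elbow", "l_forearm")]),
    Joint("r_shoulder", "r_upperarm", [Joint("r_elbow", "r_forearm")]),
])
print(joint_names(root))
```

Because the structure is purely hierarchical, rotating a joint implicitly moves every joint and segment beneath it, which is what makes the abstract model portable across presentation systems.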
This document is intended to provide a generic extensible full body image data format for biometric recognition applications requiring exchange of human full body image data. Typical applications are:
a) automated body biometric verification and identification of an unknown individual or cadaver (one-to-one as well as one-to-many comparison);
b) support for human verification of identity by comparison of individuals against full body images; and
c) support for human examination of full body images with sufficient resolution to allow a human examiner to verify identity or identify a living individual or a cadaver.
This document ensures that full human body images and image sequence data generated by video surveillance and other similar systems are suitable for identification and verification. The structure of the data format in this document is compatible with ISO/IEC 39794-5. In addition to the data format, this document specifies application-specific profiles including scene constraints, photographic properties and digital image attributes like image spatial sampling rate, image size, etc. These application profiles are contained in a series of annexes. The 3D encoding types "3D point map" and "range image" are not supported by this document.
This document describes guidelines for developing education and training systems using VR/AR/MR technology. It defines VR/AR/MR-based information modelling that can be used for education and training systems. It provides procedures and methods to be used when developing 3D VR/AR/MR-based education and training systems using ISO/IEC JTC 1 standards. It also provides a systematic approach to developing VR/AR/MR-based applications for systems integration areas. This work will:
- define concepts of VR/AR/MR-based education and training;
- define an information modelling architecture for VR/AR/MR-based education and training systems;
- specify standards-based functional components for VR/AR/MR-based education and training systems;
- specify framework components for implementing VR/AR/MR-based education and training systems; and
- include use cases for VR/AR/MR-based education and training systems based on the information modelling architecture.
Device hardware technology for VR/AR/MR-based education and training systems is excluded from this document.
ISO/IEC 29146:2016 defines and establishes a framework for access management (AM) and the secure management of the process to access information and Information and Communications Technologies (ICT) resources, associated with the accountability of a subject within some context. This International Standard provides concepts, terms and definitions applicable to distributed access management techniques in network environments. This International Standard also provides explanations about related architecture, components and management functions. The subjects involved in access management might be uniquely recognized to access information systems, as defined in ISO/IEC 24760. The nature and qualities of physical access control involved in access management systems are outside the scope of this International Standard.
This document specifies an image-based representation model that represents target objects/environments using a set of images and, optionally, the underlying 3D model for accurate and efficient representation of objects/environments at an arbitrary viewpoint. It is applicable to a wide range of graphics, virtual reality and mixed reality applications which require a method of representing a scene with various objects and environments. This document:
(1) defines terms for image-based representation and 3D reconstruction techniques;
(2) specifies the required elements for image-based representation;
(3) specifies a method of representing the real world in the virtual space based on image-based representation;
(4) specifies how visible image patches can be integrated with the underlying 3D model for more accurate and rich representation of objects/environments from arbitrary viewpoints;
(5) specifies how the proposed model allows multi-object representation; and
(6) provides an XML-based specification of the proposed representation model and an actual implementation example (see Annex A).
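Item (6) above describes an XML-based specification of the representation model. As a rough illustration of the idea, the fragment below assembles a multi-view object description with an optional underlying 3D model; every element and attribute name here is a hypothetical placeholder, since the normative schema is the one given in the standard's Annex A:

```python
import xml.etree.ElementTree as ET

# All names below ("ImageBasedRepresentation", "Object", "Image", "Model",
# "viewpoint", "uri") are illustrative placeholders, not the Annex A schema.
rep = ET.Element("ImageBasedRepresentation")
obj = ET.SubElement(rep, "Object", id="chair01")
ET.SubElement(obj, "Image", viewpoint="0 0 1", uri="chair_front.png")
ET.SubElement(obj, "Image", viewpoint="1 0 0", uri="chair_side.png")
ET.SubElement(obj, "Model", uri="chair.obj")  # optional underlying 3D model

xml_text = ET.tostring(rep, encoding="unicode")
print(xml_text)
```

A renderer would select (or blend) the image patches whose recorded viewpoints best match the requested viewpoint, falling back on the 3D model where no patch is visible.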
This document specifies:
- physical and material parameters of virtual or real objects expressed to support comprehensive haptic rendering methods, such as stiffness, friction and micro-textures; and
- a flexible specification of the haptic rendering algorithm itself.
It supplements other standards that describe scene or content description and information models for virtual and mixed reality, such as ISO/IEC 19775 and ISO/IEC 3721-1.
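To show how parameters such as stiffness and friction feed a haptic rendering loop, here is a common penalty-style contact model in one dimension. This is a non-normative sketch: the standard's parameter set is richer (micro-textures etc.), and the viscous term below merely stands in for a full friction model:

```python
def contact_force(penetration: float, velocity: float,
                  stiffness: float, friction: float) -> float:
    """One-dimensional penalty-style contact force (illustrative only).

    A spring term proportional to surface penetration (the stiffness
    parameter) plus a viscous term opposing motion (a stand-in for
    friction).  Units: metres, m/s, N/m, N*s/m -> newtons.
    """
    if penetration <= 0.0:   # no contact, no force
        return 0.0
    return stiffness * penetration - friction * velocity

# 2 mm penetration at 0.1 m/s into a fairly stiff virtual surface:
print(contact_force(0.002, 0.1, stiffness=1500.0, friction=2.0))  # 3.0 - 0.2 = 2.8
```

A haptic device driver would evaluate such a model at kilohertz rates, which is why the rendering algorithm, and not only the material parameters, benefits from a flexible specification.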
This document specifies the syntax, semantics and decoding for visual volumetric media using video-based coding methods. This document also specifies processes that can be needed for reconstruction of visual volumetric media, which can also include additional processes such as post-decoding, pre-reconstruction, post-reconstruction and adaptation.
ISO/IEC 14772, the Virtual Reality Modeling Language (VRML), defines a file format that integrates 3D graphics and multimedia. Conceptually, each VRML file is a 3D time-based space that contains graphic and aural objects that can be dynamically modified through a variety of mechanisms. This part of ISO/IEC 14772 defines a primary set of objects and mechanisms that encourage composition, encapsulation, and extension. The semantics of VRML describe an abstract functional behaviour of time-based, interactive 3D, multimedia information. ISO/IEC 14772 does not define physical devices or any other implementation-dependent concepts (e.g., screen resolution and input devices). ISO/IEC 14772 is intended for a wide variety of devices and applications, and provides wide latitude in interpretation and implementation of the functionality. For example, ISO/IEC 14772 does not assume the existence of a mouse or 2D display device. Each VRML file:
a. implicitly establishes a world coordinate space for all objects defined in the file, as well as all objects included by the file;
b. explicitly defines and composes a set of 3D and multimedia objects;
c. can specify hyperlinks to other files and applications; and
d. can define object behaviours.
An important characteristic of VRML files is the ability to compose files together through inclusion and to relate files together through hyperlinking. For example, consider the file earth.wrl, which specifies a world that contains a sphere representing the earth. This file may also contain references to a variety of other VRML files representing cities on the earth (e.g., the file paris.wrl). The enclosing file, earth.wrl, defines the coordinate system that all the cities reside in. Each city file defines the world coordinate system that the city resides in, but that becomes a local coordinate system when contained by the earth file. Hierarchical file inclusion enables the creation of arbitrarily large, dynamic worlds.
Therefore, VRML ensures that each file is completely described by the objects contained within it. Another essential characteristic of VRML is that it is intended to be used in a distributed environment such as the World Wide Web. There are various objects and mechanisms built into the language that support multiple distributed files, including:
a. in-lining of other VRML files;
b. hyperlinking to other files;
c. using established Internet and ISO standards for other file formats; and
d. defining a compact syntax.
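The earth.wrl/paris.wrl example turns on coordinate-space composition: a point authored in the city file's coordinate system is carried into the world's coordinate system by whatever transform the enclosing file places around the inlined content. The sketch below illustrates only that composition rule, with a transform reduced to translation plus uniform scale and entirely made-up values:

```python
# Illustrative only: a simplified stand-in for a VRML Transform node
# (translation + uniform scale; real Transforms also rotate).  The
# placement values for the inlined city file are invented.

def to_world(transform, point):
    """Map a point from an inlined file's local space to world space."""
    (tx, ty, tz), scale = transform
    x, y, z = point
    return (tx + scale * x, ty + scale * y, tz + scale * z)

paris_local = (1.0, 0.0, 0.0)              # a point authored in paris.wrl
paris_placement = ((2.0, 0.0, 5.0), 0.5)   # how earth.wrl positions the inline
print(to_world(paris_placement, paris_local))  # -> (2.5, 0.0, 5.0)
```

Inside paris.wrl the same coordinates are "world" coordinates; only when the file is contained by earth.wrl do they become local, which is what lets worlds nest to arbitrary depth.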
The technologies specified in this document are:
- description languages and vocabularies to characterize devices and users; and
- control information to fine-tune the sensed information and the actuator commands for the control of virtual/real worlds, i.e. the user's actuation preference information, the user's sensor preference information, actuator capability descriptions and sensor capability descriptions.
The adaptation engine is not within the scope of this document. This document specifies the syntax and semantics of the tools required to provide interoperability in controlling devices (actuators and sensors) in real as well as virtual worlds:
- Control Information Description Language (CIDL), an XML schema-based language which enables one to describe a basic structure of control information;
- Device Capability Description Vocabulary (DCDV), an XML representation for describing capabilities of actuators such as lamps, fans, vibrators, motion chairs, scent generators, etc.;
- Sensor Capability Description Vocabulary (SCDV), interfaces for describing capabilities of sensors such as a light sensor, a temperature sensor, a velocity sensor, a global position sensor, an intelligent camera sensor, etc.;
- Sensory Effect Preference Vocabulary (SEPV), interfaces for describing the preferences of an individual user for specific sensorial effects such as light, wind, scent, vibration, etc.; and
- Sensor Adaptation Preference Vocabulary (SAPV), interfaces for describing the preferences of an individual user for each type of sensed information.
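Although the adaptation engine itself is out of scope, the vocabularies above exist so that such an engine can reconcile an authored effect, a device's capability and a user's preference. The toy function below shows that reconciliation for a single intensity value; the scenario and the 0-to-1 preference scale are assumptions for illustration, not elements of the DCDV/SEPV vocabularies:

```python
# Illustrative only: the capability maximum and preference scale stand in
# for values an adaptation engine would read from DCDV and SEPV
# descriptions; the names and units are invented for this sketch.

def adapt_command(requested: float, capability_max: float,
                  preference_scale: float) -> float:
    """Scale an authored effect intensity by the user's preference,
    then clamp it to what the actuator can actually deliver."""
    return min(requested * preference_scale, capability_max)

# A wind effect authored at intensity 8, a fan capable of at most 5,
# and a user who prefers wind effects at half strength:
print(adapt_command(8.0, capability_max=5.0, preference_scale=0.5))  # -> 4.0
```

The same pattern applies in the sensing direction, where SCDV capabilities and SAPV preferences condition how raw sensed values are reported to the virtual world.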