




Project 3: Dream Space: Real-time Multimodal Interaction and AI Experience in Spatial Computing Based on Vision Pro

This research explores novel approaches to three-dimensional content creation and immersive environment generation on the Apple Vision Pro platform. With the rapid development of metaverse and XR technologies [1,2,5], efficient and intuitive 3D creation tools have become critical bridges connecting virtual and real worlds. Traditional creation methods show significant limitations when faced with complex interaction requirements and large-scale content production [6,7], particularly in natural interaction and creation efficiency. To address these challenges, we developed the Dream Space system, which integrates multiple generative AI technologies [9] to convert 2D images into high-quality 3D models [10] and uses the SwiftUI and RealityKit frameworks for immersive presentation. The system deeply integrates the Blockade Labs API, supporting real-time generation and customisation of 360-degree panoramic environments from text prompts, so that users can intuitively create and modify the atmosphere, lighting, and spatial composition of virtual spaces.
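As a rough illustration of this text-to-panorama pipeline, a minimal Swift sketch might look as follows. The endpoint URL, request and response fields, and the x-api-key header are assumed placeholders rather than the documented Blockade Labs schema, and SkyboxGenerator is a hypothetical helper, not Dream Space's actual code.

```swift
import Foundation
import RealityKit

// Minimal sketch of the text-to-panorama flow. The endpoint URL, request and
// response fields, and the x-api-key header are illustrative placeholders,
// not the documented Blockade Labs schema; real generation is likely
// asynchronous and would require polling for completion.
struct SkyboxRequest: Encodable {
    let prompt: String                       // e.g. "a calm alpine lake at sunset"
}

struct SkyboxResponse: Decodable {
    let fileUrl: URL                         // assumed field: URL of the equirectangular panorama
}

enum SkyboxGenerator {
    /// Requests a 360-degree panorama for a text prompt and saves it locally.
    static func generatePanorama(prompt: String, apiKey: String) async throws -> URL {
        var request = URLRequest(url: URL(string: "https://backend.blockadelabs.com/api/v1/skybox")!)
        request.httpMethod = "POST"
        request.setValue("application/json", forHTTPHeaderField: "Content-Type")
        request.setValue(apiKey, forHTTPHeaderField: "x-api-key")
        request.httpBody = try JSONEncoder().encode(SkyboxRequest(prompt: prompt))

        let (data, _) = try await URLSession.shared.data(for: request)
        let response = try JSONDecoder().decode(SkyboxResponse.self, from: data)

        // Download the panorama so RealityKit can load it from disk.
        let (imageData, _) = try await URLSession.shared.data(from: response.fileUrl)
        let localURL = FileManager.default.temporaryDirectory.appendingPathComponent("panorama.png")
        try imageData.write(to: localURL)
        return localURL
    }

    /// Maps the panorama onto an inward-facing sphere that surrounds the user.
    static func makeSkyboxEntity(panoramaURL: URL) throws -> ModelEntity {
        let texture = try TextureResource.load(contentsOf: panoramaURL)
        var material = UnlitMaterial()                  // unlit, so scene lighting does not affect the sky
        material.color = .init(texture: .init(texture))

        let sphere = ModelEntity(mesh: .generateSphere(radius: 1000), materials: [material])
        sphere.scale = SIMD3<Float>(-1, 1, 1)           // flip normals so the texture faces inward
        return sphere
    }
}
```

In a visionOS app the returned entity would typically be added to a RealityView inside an ImmersiveSpace, which is how the generated panorama becomes the surrounding environment.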
Dream Space fully utilises Vision Pro's spatial computing capabilities, employing multimodal interaction design [13] to support natural input methods such as gesture recognition and gaze tracking. Users can directly manipulate three-dimensional content through intuitive gestures in immersive environments, whilst the system's real-time feedback mechanism significantly enhances the fluidity of creation. We also enriched the immersive experience with RealityKit's spatial audio and physics simulation features, creating a more natural and engaging interactive environment.
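A minimal sketch of this kind of gesture-driven manipulation on visionOS, using RealityView's entity-targeted gestures, is shown below; the placeholder entity and the simple drag handler are illustrative rather than the system's actual implementation.

```swift
import SwiftUI
import RealityKit

// Minimal sketch of pinch-and-drag manipulation in an immersive RealityView.
// The placeholder entity and drag handler stand in for Dream Space's actual
// interaction logic (snapping, physics responses, gaze highlighting, etc.).
struct CreationView: View {
    var body: some View {
        RealityView { content in
            // A placeholder object standing in for AI-generated 3D content.
            let model = ModelEntity(mesh: .generateBox(size: 0.2),
                                    materials: [SimpleMaterial(color: .cyan, isMetallic: false)])
            model.position = [0, 1.2, -1]

            // Both components are required for the entity to receive input.
            model.components.set(InputTargetComponent())
            model.components.set(CollisionComponent(shapes: [.generateBox(size: [0.2, 0.2, 0.2])]))
            content.add(model)
        }
        // visionOS pairs gaze with the pinch gesture, so the user looks at an
        // object and pinch-drags to move it.
        .gesture(
            DragGesture()
                .targetedToAnyEntity()
                .onChanged { value in
                    let entity = value.entity
                    // Convert the gesture location into the entity's parent space.
                    entity.position = value.convert(value.location3D,
                                                    from: .local,
                                                    to: entity.parent!)
                }
        )
    }
}
```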
Through a mixed-methods evaluation with 20 participants from diverse backgrounds, we validated the system's significant advantages in model generation quality, environment rendering, and interaction responsiveness. The data show that Dream Space enabled participants to create simple scenes in just 20-30 minutes, with even novice users successfully completing creative tasks within this timeframe, and users reported high satisfaction with the system. These results confirm that spatial computing-based creation paradigms can effectively lower the technical barriers to 3D content creation whilst providing more natural and efficient creative experiences.

Project 2: Coral Model Generation from Single Images for Virtual Reality Applications

With the rapid development of Virtual Reality (VR) technology, there is a growing demand for high-quality, realistic 3D models. Traditional modelling methods struggle to meet the needs of large-scale customisation, facing challenges in both efficiency and quality. This paper introduces a deep-learning framework that generates high-precision, realistic 3D coral models from a single image. The framework uses the Coral dataset to extract geometric and texture features, perform 3D reconstruction, and optimise design and material blending. Its 3D model optimisation and polygon count control preserve shape accuracy, maximise detail retention, and allow models of varying complexity to be output flexibly, catering to high-quality rendering, real-time display, and interactive use.
In this project, we incorporate Explainable AI (XAI) to transform AI-generated models into interactive "artworks". The output can be examined more effectively in VR and XR than on a 2D screen, making it more explainable and easier to evaluate. This interdisciplinary exploration expands the expressiveness of XAI, improves the efficiency of human-machine collaboration, and opens new pathways for bridging the cognitive gap between AI and the public. Real-time feedback is integrated into the VR interaction, allowing the system to display information such as coral species, habitat, and morphology as users explore and manipulate the coral model, enhancing the model's interpretability and interactivity. The generated models surpass those produced by traditional methods in detail representation, visual quality, and computational efficiency. This research provides an efficient and intelligent approach to 3D content creation for VR, potentially lowering production barriers, increasing productivity, and promoting the wider application of VR. In addition, incorporating explainable AI into the workflow offers new perspectives for understanding AI-generated visual content and advances research on the interpretability of 3D vision.
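The abstract does not name the VR framework behind this feedback loop; the sketch below, which borrows RealityKit purely for illustration, attaches hypothetical species, habitat, and morphology metadata to a coral entity as a custom component and surfaces it when the user taps the model.

```swift
import SwiftUI
import RealityKit

// Hypothetical metadata carried by each generated coral model.
struct CoralInfoComponent: Component {
    var species: String
    var habitat: String
    var morphology: String
}

struct CoralInspectorView: View {
    @State private var selectedInfo: CoralInfoComponent?

    var body: some View {
        RealityView { content in
            CoralInfoComponent.registerComponent()      // register the custom component before use

            // Placeholder entity standing in for an AI-generated coral model.
            let coral = ModelEntity(mesh: .generateSphere(radius: 0.15),
                                    materials: [SimpleMaterial(color: .orange, isMetallic: false)])
            coral.components.set(CoralInfoComponent(species: "Acropora cervicornis",
                                                    habitat: "shallow reef crest",
                                                    morphology: "branching"))
            coral.components.set(InputTargetComponent())
            coral.components.set(CollisionComponent(shapes: [.generateSphere(radius: 0.15)]))
            content.add(coral)
        }
        // Tapping the coral surfaces its metadata as explanatory feedback.
        .gesture(
            SpatialTapGesture()
                .targetedToAnyEntity()
                .onEnded { value in
                    selectedInfo = value.entity.components[CoralInfoComponent.self]
                }
        )
        .overlay(alignment: .bottom) {
            if let info = selectedInfo {
                Text("\(info.species) · \(info.habitat) · \(info.morphology)")
                    .padding()
                    .glassBackgroundEffect()
            }
        }
    }
}
```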


Project 1: Immersive Climate Insights: Design and Evaluation of a Web VR-based Climate Change Data Visualisation

This research presents an innovative Web VR-based climate change visualisation system that integrates multidimensional data from NASA's Climate Change Database using the A-Frame framework. Our dual-interface system enables intuitive exploration on both desktop and VR platforms, focusing on key climate indicators including CO2 concentration, global temperature, ice coverage, and sea levels.
A comparative study with 46 participants revealed meaningful improvements in data comprehension through immersive visualisation, particularly in temperature data retention (a 7% increase) and in understanding climate patterns. Most notably, 76% of users reported substantially greater engagement with the interactive 3D visualisation compared with traditional 2D methods, while also identifying technical optimisation needs in loading performance and cross-platform compatibility. This research establishes a practical framework for immersive environmental data visualisation, contributing both to theoretical advancement and to public climate change communication.