It is a very fun project. We use photogrammetry to scan the University of Toronto campus in the real world and rebuild it in the digital world. Students can play roles in this digital campus and complete their real-world study tasks there. In other words, students can play the game while finishing their daily study tasks — doesn't that sound exciting? We also provide crafted game plots and conversations that guide players past the main buildings along the major roads, which is especially helpful for people who need to become more familiar with the campus.
However, the project is quite challenging, not only on the technology side but also in the narrative building. Please scroll down to see how we overcame the difficulties and finally achieved it!
Since the pandemic, people have gotten used to remote study and work. Consider this scenario: a new semester is coming soon, and incoming freshmen are excited to come to school. Before they arrive, they can log in to the online campus and get familiar with the campus buildings and facilities through gaming. Those who missed orientation or are taking online courses can certainly try this method as well. They can make friends online and then meet in person after they arrive on campus. This digital way of living can remain after the pandemic and help boost communication and networking on campus, and the approach could deliberately be applied widely around the world.
The following drawing marks the main buildings with yellow stickers.
One of our team members is very good at writing game plots, so he was mainly responsible for the plot design.
Based on the game plot, I designed the user journey map.
After finishing the concept and game-plot design, we started to build our digital model using photogrammetry. I first ran many tests on my own face and hands in order to master this technique. I failed many times and documented the process.
Shooting: after many failed attempts at creating point clouds from the continuous frames generated by After Effects, I finally identified the key points for shooting objects.
First, the environment in which the object is placed is very important, and some setup of the surroundings is necessary. In other words, avoid a clean, mono-colour background, especially when the object is small, thin, or looks identical from all angles, because such objects are hard for Agisoft to recognize. Relatively busy surroundings help establish shooting angles and increase the chance of getting a complete, correctly ordered scanned object in Agisoft. Second, the camera movement should be slow enough that After Effects can extract plenty of usable frames when it breaks the footage down, and the object should always stay inside the shooting frame, except when deliberately shooting close-ups to record the object's details.
Third, pre-plan the shooting path: circle the object, or follow some other organized route, as long as you do not shoot the object along a random pathway.
Fourth, it is better to shoot the object by walking or moving around it than to keep the camera fixed and rotate the object itself. In the latter case, a mono-colour background is mandatory.
Following the notes above, it is possible to get a successful point cloud in Agisoft. However, to get a more detailed and cleaner digital model, I scanned the hand and face separately so that each object was captured from enough angles. Scanned as a whole, some overlapping, occluded areas might not be captured.
So a big challenge for me was combining those two parts into one digital model. I tried various approaches in Agisoft, such as merging the two chunks in the workspace, but they all failed. In the end, I got this done by importing the two models into Blender separately and joining them there.
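Conceptually, once the two scans are roughly aligned (as ours were after manual placement in Blender), merging reduces to transforming one point set into the other's coordinate frame, concatenating, and removing doubled points where the surfaces overlap. A minimal NumPy sketch of that idea, with an illustrative transform and voxel size rather than values from our actual face/hand scans:

```python
# Minimal sketch of merging two roughly aligned photogrammetry point clouds.
# The rigid transform and voxel size are illustrative assumptions.
import numpy as np

def merge_point_clouds(cloud_a, cloud_b, transform_b=None, voxel=0.005):
    """Merge two (N, 3) point arrays; deduplicate points that share a voxel cell."""
    if transform_b is not None:
        # Apply a 4x4 homogeneous transform to bring cloud_b into cloud_a's frame.
        homog = np.hstack([cloud_b, np.ones((len(cloud_b), 1))])
        cloud_b = (homog @ transform_b.T)[:, :3]
    merged = np.vstack([cloud_a, cloud_b])
    # Snap points to a voxel grid and keep one representative per occupied cell.
    keys = np.floor(merged / voxel).astype(np.int64)
    _, unique_idx = np.unique(keys, axis=0, return_index=True)
    return merged[np.sort(unique_idx)]
```

This is only the geometric core; a real merge in Blender or Agisoft also has to reconcile textures and mesh topology, which is why we did that step interactively.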
Now, with the experience gained from making the digital face and hands, we were good to go: on to small-scale objects as well as large spaces.
Takeaways: in this project, I failed many times and gradually mastered photogrammetry. It shows the importance of testing, which reveals room for improvement, drives iteration, and ultimately produces a better product and user experience. The space in the game cannot be as accurate as reality. Thus, we question what effects the built environment has on spatial experience, and whether this compounded spatial experience influences how people perceive the space, resulting in emotional changes.
The isometric view displays the entire scope of the ongoing scene. Registered members will appear in the same scenario, and clicking on each character will reveal their profile. Users can also add friends and utilize various functions on the interface, such as a calendar, daily tasks, a plaza for group chats, and a map for navigation. The icons in the bottom-left corner assist in transitioning from the isometric view to a perspective view.