Imagine you could describe a five-story apartment building in simple words and instantly see a 3D model you can explore through mixed reality.
At Texas A&M’s College of Architecture, researchers are working to make this future real. With funding from the National Science Foundation (NSF), they are creating new tools that combine artificial intelligence (AI), augmented reality (AR) and spatial reasoning.
Dr. Wei Yan, a professor in the Department of Architecture, leads this work. His research team includes doctoral students who lead projects of their own.
Together, they are building new tools that are changing how architecture is taught and practiced.
Describe A Building, Then See It Appear
What if you could start designing a building just by typing a sentence?
That’s what Text-to-Visual Programming GPT (Text2VP) does. It’s a new generative AI tool made by doctoral student Guangxi Feng. Yan said that generative AI can already create text, images, videos and even 3D models from text prompts.
Text2VP goes a step further: it turns a written description into a parametric model, so users can change the shape, size and layout without writing any code, guided by their architectural knowledge.
Completing such modeling tasks in design software normally takes hours or days. Text2VP speeds up early design work, so designers can spend more time being creative instead of dealing with technical details.
Even though it’s still being developed, Yan said it could change the way students and professional designers start their projects.
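For readers curious how a text-to-parameters step might look in code, here is a minimal sketch assuming an OpenAI-style chat API. The prompt, parameter schema and model name are illustrative assumptions, not Text2VP’s actual implementation.

```python
# A minimal sketch of turning a sentence into editable design parameters.
# The prompt, schema and model name are illustrative assumptions, not
# Text2VP's actual implementation.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "Translate the building description into JSON parameters: "
    '{"stories": int, "story_height_m": float, "footprint_m": [width, depth]}. '
    "Reply with JSON only."
)

def text_to_parameters(description: str) -> dict:
    """Ask the model to turn plain English into a small parameter set."""
    response = client.chat.completions.create(
        model="gpt-4o",
        response_format={"type": "json_object"},
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": description},
        ],
    )
    return json.loads(response.choices[0].message.content)

params = text_to_parameters("A five-story apartment building, 20 m by 15 m.")
print(params)  # e.g. {"stories": 5, "story_height_m": 3.0, "footprint_m": [20, 15]}
```

The payoff of a parametric result is the editing step: a designer who wants a sixth story changes one number and regenerates the model instead of remodeling the geometry by hand.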
Talk To Your Model, Get Instant Feedback
Doctoral student Farshad Askari created a chatbot that lets users “talk” to their 3D building models. After uploading a design, users can ask questions about its structure, layout or how well it works. The chatbot answers with text advice and helpful pictures. It can even compare the model to industry standards or sustainability goals.
The chatbot pairs trusted information from a knowledge base with a live view of the uploaded building model, interpreted by GPT-4o Vision, so it can act as a real-time design assistant.
Soon, it could also read detailed building data in standard file formats like Industry Foundation Classes (IFC), allowing even deeper design checks.
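A rough sketch of that loop, again assuming the OpenAI vision-capable chat API: a rendered view of the model travels with the question and a few retrieved reference snippets. The file name and the reference text below are hypothetical, not Askari’s actual pipeline.

```python
# A rough sketch of "talking" to a 3D model: a rendered view plus a question
# and retrieved reference text go to a vision-capable chat model. The file
# name and the reference snippet are hypothetical.
import base64
from openai import OpenAI

client = OpenAI()

def ask_about_model(question: str, screenshot_path: str, kb_snippets: list[str]) -> str:
    with open(screenshot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    context = "\n".join(kb_snippets)  # e.g. code clauses, sustainability targets
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a building-design assistant. Ground every "
                        "answer in the reference material provided."},
            {"role": "user", "content": [
                {"type": "text",
                 "text": f"Reference material:\n{context}\n\nQuestion: {question}"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ]},
        ],
    )
    return response.choices[0].message.content

print(ask_about_model(
    "Are the two egress stairs far enough apart?",
    "model_view.png",  # hypothetical screenshot of the uploaded 3D model
    ["Example guideline: egress stairs should be separated by at least "
     "one-third of the floor's diagonal."],  # hypothetical knowledge-base entry
))
```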
Teaching AI To Understand Space Like People
Design isn’t just about shape and use. It also requires spatial intelligence: the ability to picture, rotate and move objects in 3D.
To study this problem, doctoral candidate Monjoree Uttamasha led an NSF-funded project testing AI models such as ChatGPT, Llama and Gemini against the Revised Purdue Spatial Visualization Test, a common measure of spatial intelligence. The study won Best Paper in the Computer Vision category at the 2025 IEEE Conference on Artificial Intelligence.
The results were clear: without extra context, AI models often failed to notice how shapes rotated or changed in space. Human participants outperformed the AI by a wide margin.
However, when given simple visual guides and mathematical notation, the AI improved markedly. These findings show that AI can learn spatial thinking, but it needs training with the right background information.
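To make “visual guides and mathematical notation” concrete, here is a small sketch of the kind of scaffolding that can be added to a rotation question. The prompt wording is illustrative, not the study’s actual test items.

```python
# A sketch of scaffolding a rotation question with explicit notation.
# The prompt wording is illustrative, not the study's actual test items.
import numpy as np

# A 90-degree rotation about the z-axis, written out as the matrix an
# augmented prompt could include.
theta = np.pi / 2
Rz = np.array([
    [np.cos(theta), -np.sin(theta), 0],
    [np.sin(theta),  np.cos(theta), 0],
    [0,              0,             1],
])

corner = np.array([1.0, 0.0, 0.0])  # a reference point on the object
print(np.round(Rz @ corner))        # -> [0. 1. 0.]: the +x corner maps to +y

bare_prompt = ("The object is rotated the same way as the example. "
               "Which answer shows the result?")

augmented_prompt = (
    "Use a right-handed frame: x right, y back, z up.\n"
    "The example rotation is 90 degrees about the z-axis, i.e. the matrix:\n"
    f"{Rz}\n"
    "Apply the same rotation to the object. Which answer shows the result?"
)
```

Spelling out the coordinate frame and the matrix turns an ambiguous picture-matching task into a calculation the model can actually carry out, which is the gist of why the scaffolded prompts scored better.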
With the right help, AI tools can start to think more like human designers. Yan’s team sees this project, along with others in their lab, as a step toward improving AI technology and how design is taught.
Source: Texas A&M University (Edited by Subcontractors USA)

