IV. From Visualization to Analysis: A Spectrum of Documentation Tools
This section introduces a core insight from this project: that the power of immersive documentation tools lies not only in what they capture, but in who controls the processing of that data. The distinction between visualization and analysis technologies is not merely about resolution or price—it is about infrastructure, access, and the capacity to tell stories with consequence. Visualization tools, typically mobile, lightweight, and cloud-enabled, allow for the rapid generation of visual representations. Analysis tools, by contrast, require local processing, technical fluency, and robust computing environments. The former democratizes viewing; the latter enables spatial reasoning, conservation planning, and legal advocacy. And the choice between them often determines which sites—and which communities—are seen, interpreted, or preserved.
These categories exist on a continuum. At one end are visualization tools, such as smartphones and tablets paired with apps like Scaniverse, Polycam, and Luma AI.1 These tools use photogrammetry and LiDAR sensors to generate lightweight mesh models, processed on-device or in the cloud. Their primary strength lies in their accessibility: they require no technical training, cost less than $1,200, and allow for quick uploads to platforms like Sketchfab. In the field, this means a single user can document a site in minutes and publish it online without ever opening a 3D editing program.
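To make that publishing step concrete, the sketch below shows how a mesh exported from a mobile app could be pushed to Sketchfab by script rather than through an app's built-in share button. It is an illustration under assumptions, not the project's workflow: the endpoint, the modelFile and isPublished fields, the token placeholder, and the file name follow Sketchfab's public Data API as commonly described and should be verified against current documentation.

```python
# Illustrative sketch of scripting the "publish to Sketchfab" step.
# The project relied on the apps' built-in upload; endpoint and field
# names below are assumptions to verify against Sketchfab's current docs.
import requests

API_TOKEN = "YOUR_SKETCHFAB_API_TOKEN"           # placeholder, not a real token
MESH_PATH = "tabby_ruins_scaniverse_export.glb"  # hypothetical mobile-app export

with open(MESH_PATH, "rb") as mesh_file:
    response = requests.post(
        "https://api.sketchfab.com/v3/models",
        headers={"Authorization": f"Token {API_TOKEN}"},
        files={"modelFile": mesh_file},
        data={
            "name": "Tabby Ruins (mobile scan)",
            "description": "Lightweight mesh captured with a phone LiDAR app.",
            "isPublished": "false",  # assumed field; keep private until reviewed
        },
    )
print(response.status_code, response.json())
```

Even in a scripted workflow, leaving a model unpublished until the people it documents have reviewed it is a deliberate choice rather than a default.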
At the other end are analysis tools—professional-grade terrestrial scanners like the Leica RTC360 and BLK360, paired with specialized software such as Leica Cyclone Register 360 Plus.2 These devices produce millimeter-accurate point clouds that capture the spatial relationships of buildings, objects, and terrain with forensic precision. But this power comes at a cost: the devices range from $22,000 to $80,000, and the datasets they generate can easily exceed 200GB per site.3 Processing such data requires workstation-class computers with 128GB of RAM and advanced GPUs (such as the NVIDIA RTX 4070 Ti), as well as deep familiarity with 3D workflows.4
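A rough calculation suggests why these datasets grow so large. The figures in the sketch below (points per setup, bytes per point, number of setups, and panoramic imagery size) are illustrative assumptions rather than measured values from the Daufuskie sessions, but they show how raw point data alone climbs into the hundreds of gigabytes.

```python
# Back-of-the-envelope arithmetic for why analysis-grade datasets reach
# hundreds of gigabytes. Every figure below is an illustrative assumption,
# not a measured value from the project's field sessions.

points_per_setup = 200_000_000   # assumed points captured at one scanner position
bytes_per_point = 50             # assumed: XYZ doubles plus color, intensity, metadata
setups = 20                      # assumed scanner positions for a large, complex site
imagery_gb_per_setup = 2.0       # assumed HDR panorama imagery per setup, in GB

point_gb = points_per_setup * bytes_per_point * setups / 1e9
total_gb = point_gb + imagery_gb_per_setup * setups
print(f"point data ~{point_gb:.0f} GB; with imagery ~{total_gb:.0f} GB")
# point data ~200 GB; with imagery ~240 GB
```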
The difference is not just technical—it is epistemological. Visualization tools process data for the user. Analysis tools require the user to process data themselves. This creates vastly different relationships to knowledge production. In a visualization workflow, scanning concludes the labor. In an analysis workflow, scanning is only the beginning.
These stakes came into sharp focus during the documentation of the Robinson Family Home.5 Using the RTC360, we captured six interior and exterior scans in under twenty minutes. Once the scans were imported into Cyclone, Dr. Benjamin Daniels of Tuskegee University aligned and registered them, creating a high-resolution, geospatially accurate 3D model suitable for architectural modeling, historical restoration, and interpretive planning. This process required access to lab infrastructure, software licenses, and the technical fluency to troubleshoot point cloud alignment—a level of capacity not available to most descendants or community members.
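For readers unfamiliar with what aligning and registering scans involves, the sketch below performs an analogous pairwise registration using the open-source Open3D library rather than Cyclone. The file names, voxel size, and distance threshold are placeholder assumptions, and Cyclone's own pipeline is considerably more capable, but the core operation of estimating a rigid transformation between overlapping scans is the same.

```python
# Illustrative pairwise registration of two overlapping scans with Open3D.
# This is NOT the Cyclone workflow used in the project; file names, voxel
# size, and thresholds are placeholder assumptions.
import numpy as np
import open3d as o3d

source = o3d.io.read_point_cloud("robinson_scan_01.ply")  # hypothetical exports
target = o3d.io.read_point_cloud("robinson_scan_02.ply")

# Downsample and estimate normals so point-to-plane ICP has surfaces to fit.
voxel = 0.05  # 5 cm voxels, assumed
source_down = source.voxel_down_sample(voxel)
target_down = target.voxel_down_sample(voxel)
for cloud in (source_down, target_down):
    cloud.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

# Refine a rough initial guess (identity here) with iterative closest point.
result = o3d.pipelines.registration.registration_icp(
    source_down, target_down, voxel * 2, np.eye(4),
    o3d.pipelines.registration.TransformationEstimationPointToPlane())

print("fitness:", result.fitness, "RMSE:", result.inlier_rmse)
source.transform(result.transformation)  # apply the alignment to the full cloud
o3d.io.write_point_cloud("robinson_scan_01_aligned.ply", source)
```

Even this stripped-down version presumes a machine that can hold two dense point clouds in memory, which is precisely the capacity gap at issue.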
This gap reflects a deeper form of infrastructural inequality. Just as heirs property regimes have structurally excluded Black landowners from legal protections, digital preservation systems risk excluding communities from interpretive sovereignty by concentrating analytic capacity in well-resourced institutions. As Andrea Roberts writes in her work on Texas Freedom Colonies, preservation is as much about political leverage and archival control as it is about material survival.6 A mobile scan shared on Sketchfab can raise awareness. But an analysis-grade model can support zoning appeals, grant applications, historical nominations, and public memory campaigns.
The fit between tool and context is also critical. Using the RTC360, we documented the full perimeter of the Tabby Ruins in roughly ten minutes with four scans. The iPhone 16 Pro Max, using Scaniverse, took more than twice as long and failed to capture the environmental features—slope, trees, foundation layering—that shape interpretation.7 The BLK360, while lower resolution than the RTC360, proved more versatile in unstable areas: it could be handheld in tight spaces where the RTC360's tripod was unsafe to deploy. As Tuskegee's documentation notes show, choosing the right tool depends on structural condition, lighting, and intended use case.8
In this light, visualization and analysis are not just technical choices—they are political ones. They determine what kinds of cultural memory are made legible, to whom, and for what purposes. The risk is not just under-documentation, but misrepresentation. A scan without context becomes an object without story, flattened into an aestheticized relic.
Recognizing this spectrum has shaped this project’s methodology. Visualization tools enabled rapid community-centered documentation and storytelling; analysis tools, deployed through institutional partnerships, produced archival-grade outputs for future conservation and advocacy. This hybrid approach positions immersive documentation not just as representational practice, but as reparative infrastructure—providing the evidentiary base and narrative depth needed to mobilize resources, generate public visibility, and support community-led claims to land, memory, and repair.
References
1. Apple, "iPhone 16 Pro Max Technical Specifications," https://support.apple.com/; Polycam, "Pricing," https://polycam.ai/pricing; Scaniverse, "How to Use," https://scaniverse.com/.
2. Leica Geosystems, RTC360 and BLK360 specification sheets, https://leica-geosystems.com.
3. Field notes, Daufuskie documentation sessions, April 2025.
4. Tuskegee University documentation lab specifications: 13th-generation Intel Core i9, 128GB RAM, NVIDIA RTX 4070 Ti, 64-bit operating system.
5. Field notes and interview with Dr. Benjamin Daniels, Tuskegee University, April 2025.
6. Andrea Roberts, The Texas Freedom Colonies Project, https://www.thetexasfreedomcoloniesproject.com/.
7. Field notes, scan comparison of the Tabby Ruins using the iPhone 16 Pro Max and the RTC360, April 2025.
8. Technical notes from the Tuskegee University documentation lab, shared April 2025.