Built in Atlanta: The OSS Tool Cutting UX Research Time
How an Atlanta-born open-source AI synthesis tool is helping UX researchers cut analysis time while preserving qualitative depth across fintech and logistics teams.
When a UX researcher at a Midtown Atlanta fintech firm finished a round of 22 user interviews last fall, she faced the familiar wall: hours of transcripts, a Miro board threatening to collapse under sticky notes, and a stakeholder demo in four days. She'd heard about Meridian, an open-source AI synthesis framework quietly developed by a small team out of Georgia Tech's CREATE-X program, and decided to run her notes through it. She got her affinity clusters in under two hours. More importantly, she told her team, nothing felt lost.
That's the promise — and the hard problem — at the center of how Atlanta's UX research community is approaching AI synthesis tools right now. The city's concentration of fintech firms, logistics platforms, and HBCU-connected design talent creates a particular kind of pressure on researchers: move fast enough for startup timelines, but handle data with the sensitivity that financial and supply-chain users deserve. Meridian, and the conversation around it, sits right at that intersection.
The Hook: What Meridian Is and Why It Matters
Meridian is an open-source qualitative data synthesis library — not a chat interface bolted onto your transcripts, but a structured pipeline that takes tagged interview data, observation notes, or diary study entries and surfaces thematic clusters with configurable confidence thresholds. Researchers can inspect every association the model makes, flag disagreements, and export structured findings in formats that slot directly into common research repositories.
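To make that concrete, here's a minimal usage sketch. Everything about it is an assumption drawn from the description above: the import path, the SynthesisPipeline class, and the min_confidence parameter are illustrative names, not Meridian's documented API.

```python
# Illustrative sketch only: the import path, class, and parameter names
# are assumptions based on the description above, not Meridian's real API.
from meridian import SynthesisPipeline  # hypothetical import

pipeline = SynthesisPipeline(
    notes="notes/interviews.jsonl",  # tagged, pre-processed research notes
    min_confidence=0.7,              # configurable confidence threshold
)

for theme in pipeline.run():
    print(theme.label, theme.confidence)
    # every association behind a theme is inspectable and flaggable
```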
What separates it from the wave of commercial AI research tools is the explainability layer. Every theme the tool surfaces comes with a provenance trace — which quotes, which participants, which coded segments contributed to it. For teams operating in regulated industries, that auditability isn't a nice-to-have. It's the reason legal and compliance will let you use it at all.
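For a sense of what that audit surface looks like in practice, here's an illustrative provenance record for a single surfaced theme. The field names and values are invented for this example, not Meridian's actual export format:

```json
{
  "theme": "fee transparency anxiety",
  "confidence": 0.81,
  "evidence": [
    {
      "participant": "P07",
      "segment_id": "P07-s14",
      "code": "fees/unclear",
      "quote": "I never know what the transfer actually costs until it's done."
    },
    {
      "participant": "P12",
      "segment_id": "P12-s03",
      "code": "fees/unclear",
      "quote": "The fee only shows up after I confirm."
    }
  ]
}
```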
Origin: Where It Started
Meridian grew out of a capstone project inside Georgia Tech's CREATE-X accelerator program in 2024. The founding contributors — a mixed group of HCI graduate students and product design undergrads — were doing volunteer UX research for two Atlanta nonprofit organizations and drowning in synthesis work. Their faculty advisor pushed them to open-source whatever tooling they built, on the grounds that the Atlanta civic tech community would benefit from shared infrastructure.
The initial repo was modest: a Python library, sparse documentation, and a Slack channel with about thirty members, mostly drawn from Atlanta developer groups and the Georgia Tech design research community. The team made one early decision that shaped everything: they built around JSONL-formatted research notes rather than raw audio or video. That meant Meridian worked with data researchers had already processed — it wasn't trying to replace the human work of observation and note-taking, only the mechanical work of pattern-finding across large corpora.
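A single record in that input format might look like the line below. The field names are assumptions for illustration; the point is that the note is already structured and coded by a human before the tool ever sees it:

```jsonl
{"participant": "P03", "study": "dispatch-usability", "tags": ["routing", "trust"], "note": "Dispatcher re-checks the suggested route against the whiteboard before accepting it."}
```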
That boundary turned out to matter enormously for adoption.
Adoption: Who's Actually Using It
Within about eight months of the first public release, Meridian had contributors from several directions that don't often overlap:
- Fintech product teams at firms in the Buckhead and Midtown corridors, where researchers are running frequent usability studies on mobile banking and payment flows and need to turn findings around in days, not weeks
- Logistics and supply-chain UX teams, where user research often involves complex mental models from dispatchers and warehouse operators — exactly the kind of nuanced qualitative data that generic summarization tools flatten badly
- HBCU design programs, particularly faculty at Morehouse and Clark Atlanta using Meridian in graduate research methods courses as a teaching tool for responsible AI-assisted analysis
- Independent researchers doing contract work across Atlanta's growing startup scene, who don't have the budget for enterprise research platforms but need credible, auditable synthesis output
The logistics sector adoption is worth pausing on. Atlanta sits at an unusual crossroads in that industry — the city hosts major distribution infrastructure and a growing cluster of supply-chain software companies. UX research in that space tends to involve participants who are domain experts, often skeptical of researchers, with specialized vocabulary. Meridian's tagging system, which lets researchers define their own coding schemas before synthesis rather than accepting the model's defaults, maps well onto that kind of domain-specific work.
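A researcher-defined schema for that kind of work might look like the sketch below, written as plain Python data. The structure, and any way of registering it with Meridian, is an assumption for illustration, not the library's documented format:

```python
# Hypothetical logistics coding schema, sketched as plain Python data.
# The structure is an assumption for illustration, not Meridian's
# documented schema format.
LOGISTICS_SCHEMA = {
    "name": "dispatch-operations-v1",
    "codes": {
        "routing/override": "Dispatcher rejects or edits a system-suggested route",
        "routing/trust": "Confidence or doubt expressed toward automated suggestions",
        "comms/handoff": "Information passed between dispatch and the warehouse floor",
        "vocab/domain-term": "Specialized terminology that needs glossary treatment",
    },
}
```

Declaring codes like these up front keeps the clustering anchored to the team's own domain vocabulary instead of the model's generic defaults.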
Design Choices That Paid Off (and One That Didn't)
What worked:
The provenance trace was the right call, even though it added significant complexity to the initial build. In every sector that has adopted Meridian, researchers point to it as the feature that made the tool trustworthy rather than just fast. When you can show a stakeholder exactly which participant said what, and how that quote connected to a broader theme, you're having a different kind of research conversation.
The decision to build around structured notes rather than raw media also paid off in an unexpected way: it made Meridian accessible to researchers whose organizations have strict data-handling policies around audio and video. Processed notes, often stripped of identifying details, clear a much lower compliance bar.
What didn't:
The team's first attempt at a web interface was quietly deprecated after three months. It added maintenance overhead and, more importantly, introduced a layer of abstraction that made the provenance trace harder to inspect. The community pushed back, and the team listened — stripping the interface back to a CLI and a clean Python API. That was a painful call, but the right one. If you want a GUI layer, several community members have built their own wrappers and shared them in the project's discussion forum.
How Atlanta UX Researchers Can Contribute or Learn From It
If you're a UX researcher or designer in Atlanta looking to engage with Meridian, there are several practical entry points:
- Use it on a low-stakes project first. Run a past research dataset through Meridian alongside your existing synthesis process and compare outputs (a minimal comparison sketch follows this list). The goal isn't to replace your judgment — it's to calibrate your trust in the tool before it matters.
- Contribute domain-specific coding schemas. The project maintainers have explicitly asked for community-contributed schema libraries for specific industries. If you do logistics UX or fintech research, your coding vocabulary is genuinely useful to other researchers.
- Engage with the documentation gap. The technical docs are solid; the practitioner-facing guides are thin. If you've run Meridian on a real project and can write clearly, that's where the contribution need is highest right now.
- Show up to the synthesis. The team runs a monthly working session, often coordinated through Atlanta tech meetup channels, where contributors review proposed features against real research use cases. It's unusually grounded for an OSS project.
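Here's a minimal sketch of the calibration step from the first item above: line up a past manual synthesis against tool output for the same dataset and measure how much the evidence behind each theme overlaps. All theme names and segment IDs are illustrative:

```python
# Minimal calibration sketch: compare themes from a past manual synthesis
# against tool output for the same dataset. All names and IDs are made up.

manual = {
    "fee anxiety": {"P03-s12", "P07-s14", "P12-s03"},
    "onboarding friction": {"P01-s02", "P05-s09"},
}
tool = {
    "fee transparency anxiety": {"P07-s14", "P12-s03", "P09-s01"},
    "signup drop-off": {"P01-s02", "P05-s09", "P05-s11"},
}

def jaccard(a: set, b: set) -> float:
    """Overlap between two sets of evidence segment IDs."""
    return len(a & b) / len(a | b)

# For each manual theme, find the tool theme with the most shared evidence.
for name, segments in manual.items():
    best = max(tool.items(), key=lambda kv: jaccard(segments, kv[1]))
    print(f"{name!r} best matches {best[0]!r} (overlap {jaccard(segments, best[1]):.2f})")
```

Low overlap isn't automatically a failure; it's a prompt to inspect the provenance traces and work out whose clustering you believe.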
For researchers earlier in their AI tooling journey, there's a companion explainer on AI-assisted research workflows for Atlanta teams that covers the broader landscape Meridian fits into — including when not to reach for an AI synthesis tool and what questions to ask before you do.
And if you're a developer interested in the HCI side of this work, the piece on five questions practitioners ask about AI research tools is worth your time before you touch the codebase.
The broader job market for UX researchers with AI tooling fluency is also shifting — worth browsing current tech roles if you're curious where Atlanta teams are hiring.
FAQ
Can Meridian handle research data from studies with sensitive populations?
It depends on your data preparation practices. Meridian operates on structured notes in JSONL format, not raw recordings, which gives researchers control over what personal or sensitive information enters the pipeline. Many teams working with sensitive populations run a de-identification step before processing. The project's documentation includes a section on data handling considerations, but your organization's IRB or data governance team should have final say on what's appropriate for your context.
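As one illustration of that de-identification step, a generic pre-processing pass might redact obvious identifiers before notes enter any pipeline. This is plain Python, not a Meridian feature, and it's no substitute for a proper de-identification review:

```python
import re

# Generic pre-processing sketch: redact obvious identifiers from note text
# before it enters any synthesis pipeline. Illustrative only; not a
# Meridian feature, and not a replacement for a real governance review.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Follow up with jamie@example.com or 404-555-0123 next week."))
# -> Follow up with [EMAIL] or [PHONE] next week.
```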
How much Python experience do you need to contribute to Meridian?
For using the library, intermediate Python comfort is enough — you're mostly configuring pipelines and running scripts. For contributing to the core codebase, the maintainers ask that contributors be comfortable with Python typing conventions and have some familiarity with NLP pipelines. The project's contribution guide is genuinely beginner-friendly for documentation and schema contributions, which don't require deep Python knowledge at all.
Find more UX and design voices in the Atlanta tech community at /atlanta.