JUN WANG

I designed this data architecture, but I could not see the database.

As a product manager, I knew what the data architecture of this product should look like. The challenge was not deciding the structure. The challenge was verifying whether my product intent had actually been implemented in the right way.

Author

Jun Wang writes about product structure, UX clarity, and using AI to turn product intent into working systems.

  • Focus: Systems thinking, product architecture, implementation alignment
  • Read time: 9 min
  • Context: Based on real work across Soulful Asia web, app, and course platform

As a product manager and product designer, I have always known what the data architecture of the products I own should look like.

I independently led the product design and implementation of the Soulful Asia digital platform. Soulful Asia is a Sydney-based non-profit association registered with the Australian Charities and Not-for-profits Commission. It focuses on mental health education, healing practices, and cultural exchange for Chinese-speaking communities, including new migrants, international students, and transnational families. The platform includes three surfaces: the main website, a native iOS app, and a separately deployed course system. Decisions such as which database each surface should use, how user data should be shared, and how the course system should be designed so it could later be deployed for different clients were not made for me by engineers. They were product decisions I had already worked through and written into the requirements.

But there was one thing I could not do: I could not see the database.

I wrote the requirements, engineers implemented them, and the system ran. But when I said that iOS profile updates should sync with the main website database, what exactly was executing that logic? When I said the course system should support independent deployment, what did that login path really look like?

This is the peculiar position product managers occupy when it comes to data: you designed the architecture, but you are not inside the data flow.

My data architecture decisions, and why they were complex enough to matter

Soulful Asia has three connected surfaces: the main website, the course website, and the iOS app. Their data relationship was something I designed intentionally. It was not something that formed on its own.

My decision on the iOS data source was clear: the iOS app should use the main website user system for login, call the main website APIs, and not consume data directly from the course system. Users should only maintain one identity. They should not need to register separately in two systems. If they update their profile in the app, that information should sync back to the main website database.

At the product level, that decision was logically clean. But I did not know how engineers had implemented it. Which service handled profile updates? Where was the token stored? What was the difference between ordinary key-value storage and secure system storage on iOS, and which one had actually been chosen? Those were not my implementation tasks, but I still needed a way to verify them.

The course system introduced an even deeper product decision. From the beginning, I did not design it only for Soulful Asia. I wanted it to become a SaaS system that could later be deployed independently for other teachers or organisations managing their own courses. In that sense, Soulful Asia's course platform was the first client of a broader system.

But that created a real tension. As the first client, Soulful Asia should reasonably let users sign into the course system with the same main website account. That makes the experience smoother. But as a future SaaS product for other clients, the course system must also support independent deployment, its own user database, and no hard dependency on the Soulful Asia website.

If the first deployment depends on the main website SSO, future independent deployments cannot also depend on that same SSO. If the requirement does not state that boundary clearly, engineers may either over-couple the systems and make independent deployment hard, or separate them too aggressively and damage the first-client experience.

The database implementation is a technical question. But deciding what the product should do is still a product question. I needed to think it through clearly and describe it in language engineers could understand precisely.

I brought AI specific questions, not vague questions about data sync

I did not ask AI a broad question like "how should data sync work?" I brought very specific questions:

1. If the iOS app updates a user profile, which endpoint should that hit, and when could local cached data and server-side data become inconsistent?

2. If the course system uses the main website account system for login, what does that imply at the database and authentication level, and how can it still support independent deployment later?

3. Where is user data stored, how is it stored, and does that match my expectations for security and data separation?

That is very different from asking how to fix a sync problem after it appears. One approach reacts to a symptom. The other validates the architecture before the problem happens.

The most valuable part was not that AI gave me final answers. It helped translate product intent into technical questions that could actually be checked.

SaaS and SSO boundaries: how precise the product requirement needs to be

One of the most useful questions AI asked was simple: when I say the course system should support independent deployment, independent at what level? Only the user database? Or the entire authentication logic as well?

That question made me realise that my earlier phrase, "it should be independently deployable for different teachers," still left many possible technical paths open. Those paths have very different migration costs.

AI helped split the problem into two separate questions. First: in the Soulful Asia deployment, how should user identity be connected? The cleanest answer is for the main website to expose an authentication API that the course system can call to validate tokens, rather than letting the course system read the website user tables directly. That avoids a hard database dependency.

Second: where should authentication come from in future independent deployments? If the course system treats authentication as a replaceable module from the beginning, it can either use the main website API or switch to an independent user database later without rebuilding the whole product.
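That "replaceable module" idea can be sketched in code. The sketch below is illustrative only: the class names, the endpoint URL, and the response shape are all hypothetical, not the actual Soulful Asia implementation. The point is structural: the course system depends on one small interface, and the Soulful Asia deployment and a future standalone deployment are just two implementations of it.

```python
from abc import ABC, abstractmethod


class AuthProvider(ABC):
    """Authentication as a replaceable module: course-system code depends
    only on this interface, never on a specific user database."""

    @abstractmethod
    def validate_token(self, token):
        """Return a user id if the token is valid, else None."""


class MainSiteAuth(AuthProvider):
    """First-client deployment: delegate validation to the main website's
    auth API instead of reading its user tables directly."""

    def __init__(self, validate_endpoint, http_get):
        self.validate_endpoint = validate_endpoint  # hypothetical URL
        self.http_get = http_get                    # injected HTTP client

    def validate_token(self, token):
        resp = self.http_get(self.validate_endpoint, token=token)
        return resp.get("user_id") if resp.get("valid") else None


class StandaloneAuth(AuthProvider):
    """Future independent deployment: the course system's own user store."""

    def __init__(self, sessions):
        self.sessions = sessions  # token -> user_id

    def validate_token(self, token):
        return self.sessions.get(token)
```

Swapping one provider for the other changes the deployment mode without touching any course logic, which is exactly the decoupling the requirement needed to name.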

My original requirement said only "support independent deployment." It did not explicitly describe the need to decouple the authentication layer. AI helped me see that this missing layer of detail would force engineers to make their own judgment call, and their judgment might not match my product intent.

Verifying iOS data ownership: was my design actually implemented?

I had always believed that the iOS app should only consume main website data, not course-system data. But that was my design intent, not yet a verified fact.

AI helped me build a verification framework. Instead of asking, "Was it implemented this way?" I asked, "If it was implemented this way, what should I be able to observe?"

The iOS app should only call the main website backend. It should not hit course-system service addresses. A user profile update in the app should post to the main website user-update endpoint. Course cards displayed in the app should come from content maintained in the main website backend, not from the course system's own database.

The check confirmed it: the iOS implementation matched my design intent. The app did not call course-system APIs. That data boundary was clean.
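The "what should I be able to observe?" framing turns the boundary rule into something mechanically checkable. A minimal sketch, assuming you can log the app's outgoing request URLs (for example through a debugging proxy); the hostname here is a placeholder, not the real backend address:

```python
from urllib.parse import urlparse

# Hypothetical host; the real value would come from the deployment config.
MAIN_SITE_HOSTS = {"api.soulfulasia.example"}


def boundary_violations(observed_urls):
    """Return any logged request URL whose host is not the main website
    backend. A non-empty result means the iOS data boundary leaks, e.g.
    the app is talking to the course system directly."""
    return [u for u in observed_urls
            if urlparse(u).hostname not in MAIN_SITE_HOSTS]
```

An empty result does not prove the design, but a non-empty one falsifies it immediately, which is exactly what a product-side verification needs.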

But this verification process also surfaced a question I had not focused on enough before: in which situations could the iOS app's local cache fall behind the server? Registration records and account information both use local caching. Caching is reasonable for performance and experience, but I still needed to confirm whether cache invalidation covered all the important user flows.
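The staleness question has a simple shape once written down. This is a toy sketch, not the app's actual caching code: a cached profile is served only within a time-to-live window, and a local edit writes through to the server first and then refreshes the local copy, so the two cannot silently diverge after an update.

```python
import time


class ProfileCache:
    """Toy profile cache: TTL-bounded reads, write-through updates."""

    def __init__(self, ttl_seconds, fetch_remote, post_remote, clock=time.time):
        self.ttl = ttl_seconds
        self.fetch_remote = fetch_remote  # pulls profile from the server
        self.post_remote = post_remote    # pushes an edited profile
        self.clock = clock
        self._value = None
        self._stored_at = None

    def get(self):
        fresh = (self._value is not None
                 and self.clock() - self._stored_at < self.ttl)
        if not fresh:
            # Cache miss or expiry: refetch from the server.
            self._value = self.fetch_remote()
            self._stored_at = self.clock()
        return self._value

    def update(self, profile):
        self.post_remote(profile)   # write-through: server first
        self._value = profile       # then refresh the local copy
        self._stored_at = self.clock()
```

The product question then becomes concrete: which user flows bypass `update`, and are any of them important enough that a TTL is not acceptable?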

Credentials and derived data: a technical detail product managers can easily miss

When I reviewed logout logic, AI gave me a perspective I had not previously realised I should ask about: are you clearing only the token, or are you also clearing all the data derived from that token?

At first glance that sounds like a technical detail, but it maps directly to a product requirement: after logout, what level of data separation should actually be guaranteed?

A login token is a credential. The user profile, membership data, and registration records fetched through that token are derived data. If logout only clears the token but leaves the derived data on the device, the next person using that device might briefly see cached information from the previous account before signing in.

In the current implementation, logout clears the token, account JSON, membership data, and registration records, and also triggers server-side token revocation. That is a complete flow.

I did not learn that by reading code. I learned it because AI helped me break "a complete logout" into a technical checklist I could verify with engineers.
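The checklist has an equally simple code shape. A sketch, assuming a flat key-value session store; the key names and the revocation hook are hypothetical stand-ins, not the app's real storage layout:

```python
class SessionStore:
    """Sketch of a complete logout: clear the credential AND everything
    derived from it, then revoke the token server-side."""

    # Data fetched *through* the token, not the token itself (assumed keys).
    DERIVED_KEYS = ("account_json", "membership", "registrations")

    def __init__(self, revoke_remote):
        self.data = {}
        self.revoke_remote = revoke_remote  # server-side revocation call

    def logout(self):
        token = self.data.pop("token", None)
        for key in self.DERIVED_KEYS:
            self.data.pop(key, None)        # derived data must go too
        if token:
            self.revoke_remote(token)       # no-op if already logged out
```

The bug the AI question guards against is the version of `logout` that only pops `"token"`: the credential is gone, but the previous account's derived data is still sitting on the device.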

There was another related area where I had stated a security requirement but had not tracked the exact implementation choice: local account storage on iOS. My requirement was simply that user data should be stored securely. AI helped me see that if I stop at that level, engineers still have too much room to choose any storage method. The stronger requirement is explicit: sensitive account information must use secure system storage such as Keychain, not plain key-value storage such as UserDefaults.

A gap in my acceptance standard: API success does not equal user experience success

While reviewing the activity and journal pages in the iOS app, AI helped me identify a structural gap in my acceptance process.

My usual acceptance pattern had been simple: test the API, get a 200 response, confirm the data structure, and mark it as accepted.

Then AI asked a sharper question: if the activity posters or journal cover images are SVG files, have you actually verified that they render in the simulator?

I had not. My standard was only that the API returned the correct image URL.

But for an image to appear to the user, a whole chain has to work: the backend returns a URL, the app requests it, downloads it, decodes it, and renders it. If the image format needs extra rendering support and the framework does not handle it, the user sees a blank area even though the API looks fine.

That is not only an engineering problem. It means my own acceptance standard was incomplete. I had treated "the API works" as a proxy for "the user experience works," and those are not the same thing.

From that point on, the acceptance standard for remote image content changed from "the endpoint returned 200" to "provide a simulator screenshot that proves the image rendered correctly in the real app environment."
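The new standard can even be partly automated as a pre-check. A sketch under one assumption I am making for illustration: that SVG is the only remote format in this content set that the app's image views do not decode without extra support. The check flags which URLs still require screenshot evidence rather than a 200 response:

```python
from urllib.parse import urlparse

# Formats assumed to need extra rendering support in the app; for these,
# "the endpoint returned 200" proves nothing about what the user sees.
NEEDS_EXTRA_RENDERING = {".svg"}


def rendering_risks(image_urls):
    """Return the image URLs whose acceptance evidence must be a
    simulator screenshot, not just a successful API response."""
    risky = []
    for url in image_urls:
        path = urlparse(url).path.lower()
        if any(path.endswith(ext) for ext in NEEDS_EXTRA_RENDERING):
            risky.append(url)
    return risky
```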

A feature I deliberately removed, and the product logic behind it

While reviewing the feature list, I noticed a feature that had been intentionally removed: user avatar upload and editing.

The reason had been documented very directly: remote avatar sync was too complex for the current management workflow.

That was my own product decision. But when I discussed it with AI, AI helped me structure the reasoning behind it in a way I could later explain clearly to a team or an investor.

An avatar looks small, but it introduces several data-ownership questions at once. Where is the avatar stored: locally in the app or on the server? If it is on the server, should it also appear on the main website profile page? Can an administrator replace it from the backend? If a user uploads it in the iOS app and the main website user system also has an avatar field, which one becomes the source of truth?

Each one of those questions is manageable on its own. Together, they mean defining a full lifecycle for avatar data across the iOS app, the website frontend, the website backend, and the admin system.

At this stage of the product, that cost was larger than the value. Removing the feature was not a deficiency. It was a clear scope decision: when a feature introduces more data complexity than the current system can reasonably absorb, not doing it is a sensible product decision.

What I learned from this process

  • Between product intent and technical implementation, there is an invisible layer, and product managers need a way to inspect it.
  • SaaS requirements need to be specified at the level of replaceable modules, not only high-level goals.
  • Security requirements should be written at the level of technical choice, not only intention.
  • Acceptance standards belong to the product manager. "The API works" is not the same as "the user experience works."

Final thought

I want to clarify one possible misunderstanding: this is not a story about a product manager using AI to compensate for not understanding technology.

When I designed this product's data architecture, those decisions were mine. I knew why the iOS app should use the main website database, why the course system should support SaaS-style independent deployment, and why certain features should be removed.

The real challenge came from a structural limit of the product-management role: you design the system, but you are not inside the data flow. You describe the intent, but you cannot directly see the implementation. You are responsible for the architecture, but you do not read the database.

AI's role here was to help me build a translation layer from product intent to verifiable technical properties, so I could validate whether my product decisions had actually been carried through correctly without needing to write the code myself.

For independent builders, people wearing multiple roles, or product decision-makers without deep engineering backgrounds, that is a very practical tool. It does not replace engineering judgment. It makes product intent less of a black box in front of engineering implementation.
