
Honeywell Enterprise Systems Architect Dinesh Kumar Garg on What Manufacturing Process Discipline Teaches Mental Health Software

An enterprise architect who has spent eighteen years building closed-loop Kanban systems and Oracle Advanced Supply Chain Planning deployments inside one of the world's largest industrial conglomerates spent two weeks evaluating mental health software prototypes — and concluded that the field is repeating mistakes that manufacturing engineering learned to avoid four decades ago.

There is a discipline of engineering that has nothing to do with software, and that exists in factories rather than data centers, but that has spent most of the last half century working out questions central to any system that delivers value to a human being on the other end of a process: how do you know whether the next step is the right one to take, how do you know whether the previous step actually worked, and how do you keep the whole thing from accumulating errors that nobody notices until the customer has already been harmed by them? The discipline is called manufacturing process engineering. The practitioners who have spent careers inside it have learned, often painfully, that the answer to those questions is not a feature you add at the end. It is an architecture you build into the system from the first decision.

Dinesh Kumar Garg has spent eighteen years inside that discipline. As a Senior IT Manager and Enterprise Systems Architect at Honeywell, his practice has spanned Oracle Advanced Supply Chain Planning deployments, closed-loop Kanban implementations on the manufacturing floor, vendor-managed inventory systems built on top of IoT telemetry, and the kind of multi-year enterprise resource planning consolidations that produce measurable reductions in operational expenditure precisely because they are measured against operational outcomes from day one. When Hackathon Raptors invited him to evaluate seven projects from MINDCODE 2026 — an international 72-hour hackathon focused on software for human health — he encountered a category of system that had never been built by anyone with his background, and that, in his judgement, was suffering from exactly the structural problems that manufacturing engineering had to learn to solve.

“Mental health software is being built right now by very talented people who have not had to operate a process under measurement,” Garg observes. “In manufacturing, the first thing you learn is that a process you cannot measure is a process you cannot improve, and a process you cannot improve will accumulate the same failures every cycle until somebody downstream has to absorb them. What I saw in this batch was a category of software that is delivering interventions to users without any of the feedback architecture that would let the team know whether the intervention worked. That is not a software problem. That is a process design problem, and it has a name in my field. It is called open-loop control.”

The Closed-Loop Problem in Wellness Systems

A pattern Garg flagged repeatedly across his MINDCODE batch was the absence of what manufacturing engineers call closed-loop feedback. In a closed-loop Kanban system, a downstream station consumes a resource, signals upstream that the resource has been consumed, and the upstream station produces the next unit only in response to that signal. The signal travels both directions. The system is in continuous conversation with itself. When something goes wrong — a defect, a delay, a shortage — the signal carries the failure back to the point of origin, and the upstream station can respond to it before the next cycle compounds the problem.

“The submissions I reviewed had no closed-loop architecture,” Garg notes. “A user opened the app. The app delivered an intervention — a breathing exercise, a journaling prompt, a mood check-in, an AI-generated recommendation. Then nothing. The system had no mechanism to learn whether the intervention had been completed, whether the user had benefited from it, whether the user had been distressed by it, whether the next intervention should be different. The intervention went out. No signal came back. That is open-loop control, and any process engineer will tell you that open-loop systems accumulate error until the error becomes the dominant feature of the system.”

His recommendation in this domain was structural rather than cosmetic. Build the feedback path before you build the intervention path. Decide, for every action the system takes on the user’s behalf, what signal you need back from the user to know whether the action succeeded, partially succeeded, or made things worse. Build the channel for that signal as a primary feature of the system, not as an analytics afterthought. Treat the user not as a passive recipient of interventions but as the downstream station in a Kanban loop whose response is the only legitimate authorization for the next intervention upstream.
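By way of illustration, here is a minimal sketch of what that gating could look like in code. The names — FeedbackSignal, ClosedLoopDispatcher, the outcome categories — are invented for the sketch and are not drawn from any MINDCODE submission; the point is the structure, in which the next intervention is only authorized once the feedback signal for the previous one has come back.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Optional


class Outcome(Enum):
    HELPED = "helped"
    NO_CHANGE = "no_change"
    MADE_WORSE = "made_worse"


@dataclass
class FeedbackSignal:
    intervention_id: str
    completed: bool
    outcome: Outcome


@dataclass
class ClosedLoopDispatcher:
    """Authorizes the next intervention only after the previous one's
    feedback signal has been recorded -- the user acting as the downstream
    Kanban station whose response gates upstream production."""
    pending: Optional[str] = None           # intervention awaiting feedback
    history: list = field(default_factory=list)

    def dispatch(self, intervention_id: str) -> bool:
        if self.pending is not None:
            return False                     # loop is still open; do not compound it
        self.pending = intervention_id
        return True

    def record_feedback(self, signal: FeedbackSignal) -> None:
        if signal.intervention_id != self.pending:
            raise ValueError("feedback does not match the pending intervention")
        self.history.append(signal)
        self.pending = None                  # loop closed; next dispatch allowed
```

The data model is beside the point; what matters is that nothing new travels downstream until the signal for the previous unit has travelled back upstream.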

“In the factory, we would never deliver the next part to the next station without a confirmation that the previous part was consumed correctly,” Garg explains. “Because we know what happens if we do. We accumulate inventory we cannot use. We accumulate defects we cannot trace. We accumulate process drift that nobody can debug because nobody was watching the right metric. Mental health software is right now delivering interventions to users with no acknowledgment that the previous intervention even reached the user, much less helped them. That is the same architectural mistake the manufacturing world learned to stop making in the nineteen-eighties.”

The Bottleneck Theory of Mental Health Apps

Another pattern that Garg’s eighteen years of supply chain work made impossible to overlook was what he described, drawing from the theory of constraints, as the bottleneck illusion in wellness software design. The principle from manufacturing is uncomplicated. Every system has a single dominant bottleneck. Improvements made anywhere except at the bottleneck do not improve the throughput of the system. They only generate inventory upstream of the bottleneck and starvation downstream of it. The discipline of process improvement is, before anything else, the discipline of correctly identifying where the actual bottleneck lives.

“In the projects I scored, the bottleneck was almost never where the team was investing their effort,” Garg observes. “Teams were optimizing AI model accuracy. Teams were building elaborate animation libraries. Teams were tuning recommendation algorithms. None of these were the bottleneck for any user the product was designed to serve. The actual bottleneck in mental health software is whether the user opens the app on the third day. That is the constraint that determines whether anything else the team built ever delivers value. Almost no team had measured this constraint, and almost no team was investing engineering effort in relieving it.”

His observation was operationally sharp. A mental health intervention that is mathematically optimal but that the user never engages with on the third day is a mental health intervention that does not exist. The optimization effort that went into the model has been spent on a non-bottleneck. From a process engineering perspective, that effort is waste. It is not bad engineering in the abstract. It is misallocated engineering relative to the actual constraint of the system the team is trying to improve.
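Measuring that constraint is not difficult, which is part of Garg’s point. As a rough illustration — the event-log schema here is hypothetical, not taken from any submission — the day-three return rate can be computed directly from first-open and subsequent-open timestamps:

```python
from datetime import datetime, timedelta


def day_three_return_rate(opens_by_user: dict[str, list[datetime]]) -> float:
    """Fraction of users who open the app again on or after day three.

    `opens_by_user` maps a user id to the timestamps of that user's app
    opens; the schema is illustrative only.
    """
    if not opens_by_user:
        return 0.0
    returned = 0
    for opens in opens_by_user.values():
        first = min(opens)
        # Day one is the day of first open, so day three begins two days later.
        if any(t >= first + timedelta(days=2) for t in opens):
            returned += 1
    return returned / len(opens_by_user)
```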

“The strongest submission in my batch was a team called Team Batman,” Garg notes. “What I gave them five out of five for impact was not the sophistication of the underlying logic. It was that they had clearly thought about the user’s journey on day three, on day seven, on day fourteen — about the moments where users typically disengage and what mechanism the product had to bring them back. That is bottleneck thinking. It is not glamorous. It does not show well in a hackathon demo. But it is the work that determines whether the rest of the product matters.”

Vendor-Managed Inventory and the Crisis of Clinical Capacity

A theme that Garg returned to throughout his evaluations was the question of how mental health software should be structured to interact with the limited and irreplaceable resource of human clinical capacity. In supply chain management, when a critical input is scarce and irreplaceable, the discipline that has emerged to handle it is called vendor-managed inventory. The customer does not order the input on demand. The supplier monitors the customer’s consumption telemetry and maintains the inventory at the customer’s location, replenishing it before the customer runs out. The discipline shifts the burden of managing the scarce resource from the consumer to the producer, who has the visibility and the incentive to manage it correctly.

“Therapists are vendor-managed inventory,” Garg observes. “There are not enough of them. They cannot be conjured into existence by a software product. If a mental health app is going to refer a user to a clinician, the question of whether a clinician will actually be available when the user shows up is a supply chain question, not a UX question. The teams I scored mostly treated this as an external problem. They built the referral feature. They did not build the architecture to know whether the referral would result in a clinician on the other end.”

His recommendation in this space was concrete in the way that eighteen years of vendor-managed inventory deployments at Honeywell make a person concrete. Treat clinical capacity as a measured input. Instrument the upstream — the calendars, the credentialing systems, the referral acceptance rates — and build the product around the actual availability rather than the theoretical availability. If a user is going to be told that help exists, the system that issued that promise has to know whether the help is in stock at the moment the promise was made. Otherwise, the product is making promises whose fulfillment depends on a supply chain it has not built and does not monitor.
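One way to make the promise depend on measured rather than theoretical availability — the ClinicianCapacity fields below are assumptions for the sketch, not a real calendar or credentialing integration — is to gate the referral on current capacity telemetry:

```python
from dataclasses import dataclass


@dataclass
class ClinicianCapacity:
    clinician_id: str
    open_slots_next_14_days: int      # pulled from calendar telemetry
    referral_acceptance_rate: float   # measured historically, not assumed


def referral_is_in_stock(capacity: list[ClinicianCapacity],
                         min_expected_slots: float = 1.0) -> bool:
    """Only promise help if the measured supply chain can back the promise.

    Expected available appointments = open slots weighted by the rate at
    which each clinician actually accepts referrals.
    """
    expected = sum(c.open_slots_next_14_days * c.referral_acceptance_rate
                   for c in capacity)
    return expected >= min_expected_slots
```

The threshold and the fourteen-day window are arbitrary in the sketch; the design choice it illustrates is that the referral feature consults inventory before it speaks.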

“In manufacturing, we would never tell a customer the part is available without checking the inventory,” Garg notes. “We have learned what happens when we do. The customer plans around the promise. The promise turns out to be false. The customer’s downstream system fails because they trusted us. In mental health software, the user is the downstream system, and the cost of the false promise is not measurable in dollars. It is measurable in the user’s willingness to ever ask for help again. That is a stockout no engineering team should be willing to cause.”

Standard Work as a Safety Primitive

A subject Garg returned to repeatedly in his deliberation comments was the question of process documentation — what manufacturing engineers call standard work. Standard work, in a factory, is the documented description of how a process is supposed to be performed when it is performed correctly. It is not a manual for the operator. It is the baseline against which deviations are detected. Without standard work, the process has no defined correct state, and any drift away from correctness is invisible because there is no reference to drift away from.

“Most of the projects I evaluated had no standard work for the user’s journey,” Garg observes. “The team had built a path through the product, but the team had not documented what the path was supposed to look like when it succeeded. So the team had no way to detect that the user was deviating from the successful path. They could not see when a user was getting lost. They could not see when a user was repeating the same intervention without progress. They could not see when a user was disengaging from the part of the product that mattered most. The whole thing was running open-loop, and the absence of standard work was the reason.”

His recommendation here was to write the standard work first, before writing the code. Define what a successful three-day arc through the product looks like for the target user. Define what a successful seven-day arc looks like. Define what a successful three-month arc looks like. Then build the instrumentation to measure deviations from those arcs as they happen, not after the fact. The point is not to enforce the arcs — users are not assembly lines and should not be optimized as such — but to make the system honest about whether it is delivering what it promised, on the timeline it promised to deliver it on.
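A sketch of what writing the standard work down first might look like follows — the arc definitions and event names are invented for illustration — so that deviation from the documented successful path becomes detectable as it happens rather than after the fact:

```python
from dataclasses import dataclass


@dataclass
class StandardArc:
    """Documented correct state for one stretch of the user journey."""
    name: str
    day: int
    expected_events: set[str]


# Hypothetical three-day standard work for the target user.
THREE_DAY_ARC = [
    StandardArc("day one",   1, {"onboarding_complete", "first_check_in"}),
    StandardArc("day two",   2, {"second_check_in"}),
    StandardArc("day three", 3, {"third_check_in", "first_exercise_complete"}),
]


def deviations(observed: dict[int, set[str]]) -> list[str]:
    """Compare observed events per day against the documented standard work
    and report what is missing -- the reference without which drift is invisible."""
    missing = []
    for arc in THREE_DAY_ARC:
        gap = arc.expected_events - observed.get(arc.day, set())
        if gap:
            missing.append(f"{arc.name}: missing {sorted(gap)}")
    return missing
```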

“The team called Taurus impressed me here,” Garg says. “What they had built was not the most technically sophisticated submission, but they were the only team that had a clear sense of what their product was supposed to do for the user across multiple sessions, and they had designed the product around that arc rather than around any individual interaction. That is standard work thinking, even if the team would not have called it that. It is the discipline that lets a system know whether it is succeeding.”

What the Strongest Submissions Had in Common

The submissions that scored highest in Garg’s batch shared a quality that his manufacturing process background made impossible to ignore. They had treated the user as the downstream station in a closed-loop process rather than as the endpoint of an open-loop intervention. They had identified, even if implicitly, where the bottleneck of their product actually lived and had invested effort in relieving it rather than in optimizing parts of the system that were not the constraint. They had approached the question of clinical referrals as a supply chain problem rather than as a feature flag. They had at least the beginnings of a defined successful path through the product against which to detect drift.

“The teams that produced systems I would feel comfortable seeing in production,” Garg notes, “were the teams whose architecture acknowledged that the product is one node in a longer process that includes the user, the clinical capacity behind the product, and the time that passes between sessions. The teams that produced systems I would not feel comfortable seeing in production had built impressive single-interaction experiences without the surrounding architecture that would let those interactions add up to anything. The first group was building wellness software. The second group was building wellness demos.”

His closing observation was deliberately practical. The disciplines of closed-loop control, bottleneck management, vendor-managed inventory, and standard work are not new. They have existed in industrial engineering for decades and have been deployed at scale by enterprise systems vendors like the ones he has spent his career inside. The reason they are absent from most mental health software is not that they are difficult or proprietary. The reason is that the people building mental health software have not yet been forced to confront the failures that taught the manufacturing world to want them. The cost of that absence is not yet visible in the way that a stockout or a recall is visible in a factory. But the cost is being paid every day by users whose interventions are not closing the loop, whose bottlenecks are not being relieved, and whose product is making promises it has not engineered to fulfill.

“My field had to learn this the hard way,” Garg reflects. “We learned it by building processes that failed in ways that hurt customers, and by deciding that we would not let it happen again. Mental health software is at a moment in its history where it can choose whether to learn the lesson from us before the equivalent failures start showing up in its own user base, or to learn the lesson the way we did. I would prefer the first option. The second option has costs that I do not think this field has yet imagined.”


MINDCODE 2026 — Software for Human Health was an international 72-hour hackathon organized by Hackathon Raptors from February 27 to March 2, 2026, with the official evaluation period running March 3–14. The competition attracted over 200 registrants and resulted in 21 valid submissions across the mental health and wellness domain. Submissions were independently reviewed by a panel of judges across three evaluation batches. Projects were assessed against five weighted criteria: Impact & Vision (35%), Execution (25%), Innovation (20%), User Experience (15%), and Presentation (5%). Hackathon Raptors is a United Kingdom Community Interest Company (CIC No. 15557917) that curates technically rigorous international hackathons and engineering initiatives focused on meaningful innovation in software systems.
