Building an Industrial IoT Platform with .NET and Azure
Feb 6, 2026
Over five years at HUSS B.V., I designed and built an industrial IoT platform that collects data from manufacturing equipment, processes it in near-real-time, and presents it through reporting dashboards. What started as a prototype grew into a production system serving multiple manufacturing clients. Here is what I learned along the way.
What Does an Industrial IoT Platform Look Like?
An industrial IoT platform connects physical machines on a factory floor to software that makes their data useful. At its core, it is a pipeline: devices generate telemetry, that telemetry gets ingested and processed, then stored and visualized. The goal is to give manufacturing operators and managers insight into equipment performance, production output, and anomalies - without them having to walk the floor.
Our platform at HUSS followed a layered architecture:
- Device layer - PLCs and sensors on manufacturing equipment, pushing data via MQTT or HTTP
- Ingestion layer - Azure-hosted endpoints receiving telemetry, handling authentication and throttling
- Processing layer - .NET Core workers normalizing, validating, and enriching incoming data
- Storage layer - SQL Server for structured telemetry and metadata
- Dashboard layer - Blazor Server application with real-time and historical reporting views
Each layer was independently deployable. That separation was not there from day one - it emerged after the first year when we realized tightly coupling ingestion to processing created bottlenecks during peak data bursts.
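What flows between those layers can be sketched as a small shared contract. This is an illustrative model, not the actual HUSS schema - field names are assumptions:

```csharp
// Hypothetical telemetry contract passed from device agent through
// ingestion and processing to storage. Names are illustrative.
public record TelemetryMessage(
    string MessageId,           // unique per message; used for deduplication on ingest
    string DeviceId,
    DateTimeOffset RecordedAt,  // device-side timestamp, so late data lands in the right window
    string Metric,              // e.g. "spindle_rpm"
    double Value);
```

Because every layer consumed the same model, a change to the contract surfaced as a compile error everywhere at once rather than as a runtime serialization failure.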
Choosing the Right Azure Services for IoT
Picking the right ingestion and messaging services was one of the earliest - and most consequential - decisions. Azure offers several options, and the trade-offs are not always obvious until you are knee-deep in production traffic.
Here is how we evaluated them:
| Service | Strengths | Limitations | Our Verdict |
|---|---|---|---|
| Azure IoT Hub | Device management, per-device auth, twin state, cloud-to-device messaging | Higher cost at scale, complexity for simple telemetry-only scenarios | Used for devices needing bidirectional communication |
| Azure Event Hubs | High throughput, simple producer model, cost-effective for telemetry-only streams | No device management, no cloud-to-device messaging | Primary ingestion path for high-volume telemetry |
| Custom REST endpoints | Full control, easy to debug, fits existing device firmware | You own reliability, scaling, and backpressure entirely | Used for legacy devices with HTTP-only firmware |
We ended up with a hybrid. Event Hubs handled the bulk of telemetry ingestion because most of our devices only needed to push data upstream. IoT Hub was reserved for devices where we needed firmware updates or remote configuration. A handful of older machines could only do plain HTTP POST, so we ran a small .NET Core API in front of Event Hubs for those.
The lesson: do not default to the most feature-rich service. Match the service to the device capability and communication pattern.
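For the Event Hubs path, publishing a reading looks roughly like the following sketch using the `Azure.Messaging.EventHubs` SDK. The connection string, hub name, and payload shape are placeholders:

```csharp
using System.Text.Json;
using Azure.Messaging.EventHubs;
using Azure.Messaging.EventHubs.Producer;

// Illustrative only: publish one telemetry reading to an Event Hub.
await using var producer = new EventHubProducerClient(
    "<event-hubs-connection-string>", "telemetry");

using EventDataBatch batch = await producer.CreateBatchAsync();
var payload = JsonSerializer.SerializeToUtf8Bytes(
    new { deviceId = "press-07", metric = "spindle_rpm", value = 1482.5 });

if (!batch.TryAdd(new EventData(payload)))
    throw new InvalidOperationException("Event too large for the batch.");

await producer.SendAsync(batch);
```

The batch-based producer model is part of why Event Hubs is cost-effective for telemetry-only streams: many readings can travel in one send.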
Why Blazor for the Dashboard
We chose Blazor Server for the reporting dashboard because it let us build a rich, interactive UI while keeping the entire stack in C# and .NET. For a small team - sometimes just me - that consistency mattered more than any framework benchmark.
Blazor Server worked well for our use case specifically because:
- Real-time updates via SignalR came almost for free. When new telemetry arrived, dashboards reflected it within seconds without polling.
- Server-side rendering meant we did not need to expose APIs for the frontend to consume. The dashboard queried the database directly through services.
- Shared models between the backend processing pipeline and the dashboard eliminated an entire class of serialization bugs.
The trade-off was latency sensitivity. Blazor Server requires a persistent WebSocket connection, and if the user’s network hiccupped, the UI froze. For factory floor kiosks on wired ethernet, this was fine. For managers checking dashboards from hotel Wi-Fi - less fine. We mitigated it with reconnection logic and a loading state that was honest about what was happening.
Would I pick Blazor again today? For an internal or B2B dashboard with a known user base, yes. For a public-facing consumer product, I would look harder at a decoupled SPA.
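The "real-time updates almost for free" claim translates into very little component code. A hedged sketch - `TelemetryFeed` is an assumed application service that raises an event per incoming reading, not a Blazor built-in:

```razor
@* Hypothetical dashboard fragment that re-renders when new telemetry arrives.
   TelemetryFeed is an assumed app-level service, shown for illustration. *@
@implements IDisposable
@inject TelemetryFeed Feed

<p>Latest value: @(_latest?.Value)</p>

@code {
    private TelemetryMessage? _latest;

    protected override void OnInitialized() => Feed.OnReading += HandleReading;

    private void HandleReading(TelemetryMessage msg)
    {
        _latest = msg;
        // Events arrive off the renderer's sync context; marshal back before re-rendering.
        _ = InvokeAsync(StateHasChanged);
    }

    public void Dispose() => Feed.OnReading -= HandleReading;
}
```

SignalR pushes the resulting render diff to the browser; no polling code exists anywhere in the component.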
Handling Unreliable Device Connectivity
Manufacturing environments are hostile to network connections: metal enclosures, electromagnetic interference from heavy machinery, and facilities where running new cable is a six-month procurement process. Designing for intermittent connectivity was not optional - it was the baseline assumption.
Our approach had three pillars:
- Local buffering on devices. Each device agent stored telemetry locally and forwarded it when connectivity resumed. Messages included timestamps from the device clock, so late-arriving data slotted into the correct time window.
- Idempotent ingestion. Every telemetry message carried a unique ID. The ingestion layer deduplicated on insert, so replayed messages from reconnecting devices did not corrupt aggregates.
- Health monitoring with absence detection. Rather than only alerting on bad data, we alerted on missing data. If a device that normally reports every 30 seconds went silent for 5 minutes, that triggered an investigation.
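The idempotent-ingestion pillar reduces to an insert that is a no-op for replayed messages. A sketch using `Microsoft.Data.SqlClient` - table and column names are illustrative, and `connection` / `msg` are assumed to exist in scope:

```csharp
// Sketch: insert only if this MessageId has not been seen before.
// A unique index on MessageId should back this up at the database level.
const string sql = @"
    INSERT INTO Telemetry (MessageId, DeviceId, RecordedAt, Value)
    SELECT @MessageId, @DeviceId, @RecordedAt, @Value
    WHERE NOT EXISTS (SELECT 1 FROM Telemetry WHERE MessageId = @MessageId);";

await using var cmd = new SqlCommand(sql, connection);
cmd.Parameters.AddWithValue("@MessageId", msg.MessageId);
cmd.Parameters.AddWithValue("@DeviceId", msg.DeviceId);
cmd.Parameters.AddWithValue("@RecordedAt", msg.RecordedAt);
cmd.Parameters.AddWithValue("@Value", msg.Value);

int rows = await cmd.ExecuteNonQueryAsync(); // 0 means the message was a duplicate
```

With this in place, a device that replays its entire local buffer after a week offline is merely noisy, not harmful.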
The absence detection turned out to be more valuable than any anomaly detection on the data itself. A machine producing weird numbers is concerning. A machine producing no numbers usually means something is actually wrong.
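The absence check itself is simple. A minimal in-memory sketch - in production this would run as a scheduled query against last-seen timestamps in the database:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

static class AbsenceDetection
{
    // Returns devices whose last report is older than the allowed tolerance.
    public static IEnumerable<string> SilentDevices(
        IReadOnlyDictionary<string, DateTimeOffset> lastSeen,
        TimeSpan tolerance,
        DateTimeOffset now) =>
        lastSeen.Where(kv => now - kv.Value > tolerance)
                .Select(kv => kv.Key);
}
```

For a device reporting every 30 seconds, a tolerance of `TimeSpan.FromMinutes(5)` matches the rule described above: roughly ten missed intervals before anyone is paged.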
Why SQL Server for Time-Series Data
This is the decision that raises the most eyebrows. Time-series databases like InfluxDB or TimescaleDB exist for exactly this kind of workload. We chose SQL Server anyway because it was already in the stack, the team knew it well, and for our data volumes it performed more than adequately.
Our telemetry volume was in the tens of millions of rows per month - significant but not enormous by time-series standards. With proper indexing, table partitioning by date, and a scheduled job that rolled up raw data into hourly and daily aggregates, SQL Server handled queries against months of data in under a second.
The practical advantages were real:
- Familiar tooling. Entity Framework Core, SQL Server Management Studio, and well-understood backup and restore procedures.
- Joins across domains. Telemetry data alongside device metadata, client configuration, and user preferences in the same database - no cross-system queries.
- Operational simplicity. One database engine to monitor, patch, and tune instead of two.
If our volume had been 10x higher or if we needed sub-second queries across years of raw data, a dedicated time-series store would have been the right call. Know your data volumes before reaching for specialized infrastructure.
Scaling from Prototype to Production
The prototype was a single Azure App Service running everything - ingestion, processing, storage access, and the dashboard. It worked for three devices. It did not work for thirty.
The biggest changes from prototype to production were not about code - they were about operations. Specifically:
- Infrastructure-as-code with ARM templates replaced manual Azure portal configuration. Every environment - dev, staging, production - was reproducible from a template. No more “it works in my environment” when the answer was a missing app setting.
- CI/CD pipelines in Azure DevOps automated build, test, and deployment. Pull request builds caught issues before they hit any shared environment. Release pipelines promoted through stages with approval gates.
- Structured logging and Application Insights replaced Console.WriteLine-style debugging. When a device’s data stopped appearing in dashboards, we could trace the full path from ingestion through processing to storage and find exactly where it dropped.
- Separate scaling for ingestion and dashboard. The ingestion workers scaled based on Event Hub partition lag. The Blazor dashboard scaled based on active WebSocket connections. They had completely different load patterns and needed independent scaling rules.
Setting up this operational foundation took real time - weeks, not days. But every hour invested paid back tenfold when something went wrong at 2 AM and the monitoring told us exactly what and where.
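The "partition lag" signal that drove ingestion-worker scaling can be estimated by comparing the hub's newest sequence number with the consumer's last checkpoint. A hedged sketch - the checkpoint lookup (`GetCheckpointedSequenceAsync`) is a hypothetical helper, and connection details are placeholders:

```csharp
using Azure.Messaging.EventHubs.Consumer;

// Sketch: estimate per-partition lag for scaling decisions.
await using var consumer = new EventHubConsumerClient(
    EventHubConsumerClient.DefaultConsumerGroupName,
    "<event-hubs-connection-string>", "telemetry");

foreach (string partitionId in await consumer.GetPartitionIdsAsync())
{
    PartitionProperties props = await consumer.GetPartitionPropertiesAsync(partitionId);
    long checkpointed = await GetCheckpointedSequenceAsync(partitionId); // hypothetical helper
    long lag = props.LastEnqueuedSequenceNumber - checkpointed;
    Console.WriteLine($"partition {partitionId}: lag {lag}");
}
```

Lag rising across partitions meant the workers needed to scale out; active WebSocket counts on the dashboard side said nothing about it, which is exactly why the two tiers needed independent rules.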
Lessons Learned Over Five Years
Five years on a single platform teaches you things that no greenfield project can:
- Schema migrations on live telemetry tables are terrifying. Plan your data model with extension in mind from day one. We added a JSON metadata column early on that saved us from dozens of schema changes later.
- Device firmware is the hardest thing to update. Your cloud code can deploy in minutes. Firmware on a machine in a factory behind a customer’s firewall can take weeks to roll out. Always design your protocol to be backward compatible.
- Monitoring is a feature, not overhead. The dashboards we built for our clients’ machines? We needed the same thing for our own platform health. Treat observability as a first-class product requirement.
- Start with fewer Azure services. Every managed service adds a billing line item, a failure mode, and something to learn. Add complexity only when a concrete problem demands it.
Frequently Asked Questions
What programming language is best for industrial IoT platforms?
C# with .NET Core is a strong choice for industrial IoT platforms because of its performance, strong typing, and deep integration with Azure services. The ecosystem provides libraries for MQTT, HTTP, and message queue protocols out of the box. That said, the best language is the one your team can hire for and maintain long-term - Python and Go are also common in this space.
How do you handle time-series data without a time-series database?
SQL Server can handle time-series workloads effectively at moderate scale - tens of millions of rows per month - with proper table partitioning, date-based indexing, and pre-computed aggregation tables. The key is to separate raw data retention from reporting queries. Raw data gets rolled up into hourly and daily aggregates, and most dashboard queries hit the aggregate tables rather than scanning raw telemetry.
Is Blazor suitable for real-time IoT dashboards?
Blazor Server is well-suited for real-time IoT dashboards in controlled network environments. Its built-in SignalR connection provides push updates without polling, and keeping the rendering server-side means the dashboard has direct access to backend data without an API layer. The main limitation is its dependency on a stable WebSocket connection, which makes it less ideal for unreliable mobile networks.
What Azure services do you need for an IoT platform?
A minimal Azure IoT platform needs an ingestion service - Event Hubs for telemetry-only or IoT Hub for bidirectional communication - a compute layer like App Service or Azure Functions for processing, a database for storage, and Application Insights for monitoring. Start with these four building blocks and add services like Stream Analytics or Time Series Insights only when a specific requirement justifies the added complexity and cost.
How do you ensure data reliability with unreliable factory network connections?
Reliability in factory environments requires a three-part strategy: local buffering on the device so data survives connectivity gaps, idempotent ingestion on the server so duplicate messages from reconnections are harmless, and absence-based monitoring that alerts when expected data stops arriving. The device-side buffer is the most critical piece - no amount of cloud-side engineering can recover data that was never stored locally.