The physical world — factories, farms, buildings, infrastructure, and the emerging class of autonomous machines — generates an inconceivable volume of sensor data, yet the vast majority of it is trapped behind proprietary protocols, siloed historians, and bespoke integration layers that cannot talk to one another. This paper argues that the next great infrastructure challenge of the 21st century is not compute or connectivity, but abstraction: the creation of a universal software layer that makes the physical world as legible and programmable as the digital one. We describe the sources and consequences of today's sensing fragmentation, propose the architecture of a planetary sensing stack, and explain why the convergence of industrial automation, robotics, and intelligent machines makes this the right moment to establish open standards. We close with a vision of what becomes possible when every probe, every sensor, and every actuator in the built environment speaks a common language.
We live, in 2026, in a world of two very different types of software infrastructure. On one side: the digital stack. The internet, cloud computing, open APIs, containerisation, and modern programming frameworks have built a layer of abstraction so effective that a developer in São Paulo can write a web application that serves users in Seoul, processes payments via Stripe, sends emails through SendGrid, and stores data in a database in Virginia — without understanding the physical infrastructure beneath any of it. Digital systems have achieved extraordinary interoperability through decades of accumulated standards: TCP/IP, HTTP, JSON, OAuth, gRPC, and hundreds more. The result is a composable, programmable digital world where almost anything can communicate with almost anything else.
On the other side: the physical stack. The real world — the factories that manufacture our goods, the buildings that house us, the infrastructure that moves our water and electricity, the machines that grow our food — is instrumented with a different kind of software infrastructure. It is not composable. It is not programmable in any unified sense. It is a patchwork of protocols and standards that accumulated organically over the twentieth century, each solving a local problem without reference to the broader ecosystem that would eventually surround it.
This paper is about the gap between those two worlds — and how to close it.
Industrial sensing was not designed as a unified system. It accumulated in waves, each driven by a specific engineering problem, and each producing its own protocol that became entrenched before the next wave arrived. The result is that a modern industrial facility is not built on a sensing stack — it is built on a sensing museum.
Consider a mid-sized automotive manufacturing plant built out between 1985 and 2010. In the basement, you will find 4–20 mA current loops — an electrical adaptation of 1950s pneumatic signalling, still in use because it is electrically robust and requires only two wires. In the process control rooms, HART rides on top of those same 4–20 mA loops, adding digital communication to legacy installations at the cost of a 1200 baud telephone-modem standard (Bell 202). On the production floor, Profibus connects PLCs to drives and I/O modules at speeds adequate for 1990s cycle times. In the newer sections, EtherNet/IP and PROFINET provide gigabit connectivity but remain incompatible with each other and with everything older. In the machine tool area, proprietary serial connections link CNC controllers to their drives, each from a different decade and manufacturer. On the robots, CANopen or EtherCAT runs between the controller and motor drives. And in the building management system above all of this, BACnet or LonWorks controls HVAC and lighting in a format that cannot natively speak to any of the process sensors below.
Each of these protocols was, in its time, a reasonable solution to a specific problem. The tragedy is not that they exist — it is that they all exist simultaneously, in the same building, and that integrating them requires specialists in each individual standard, custom hardware adapters, and expensive system integration work that must be re-done every time a sensor is replaced or a new production line is added.
Beyond communication protocols, there is a second layer of fragmentation: calibration. Every physical sensor drifts. Temperature coefficients, pressure non-linearity, humidity sensitivity, ageing — every probe has a unique response curve that must be characterised at manufacture and tracked over its service life. In most installations, this calibration data lives in a PDF on a server somewhere, cross-referenced by a serial number on a sticker on the probe. When the probe is replaced, the calibration data must be manually re-entered into the historian or DCS. When it is not — and often it is not — the platform continues using the old probe's calibration coefficients for the new hardware. The measurement is wrong. Nobody notices until an audit or a product quality incident forces the issue.
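What machine-readable calibration might look like can be sketched in a few lines. The names here (CalibrationRecord, the registry, the serial format) are illustrative, not drawn from any standard; the point is that the correction maths travels with the probe's identity rather than living in a PDF.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CalibrationRecord:
    """Per-probe calibration keyed by serial number (hypothetical structure)."""
    serial: str
    # Polynomial correction coefficients, lowest order first:
    # corrected = c0 + c1*raw + c2*raw**2 + ...
    coefficients: tuple

    def apply(self, raw: float) -> float:
        return sum(c * raw ** i for i, c in enumerate(self.coefficients))


# This registry stands in for the PDF-on-a-server: because the record is
# bound to the probe's identity, replacing the hardware swaps the maths too.
registry = {
    "PT-100-0042": CalibrationRecord("PT-100-0042", (0.15, 1.002, 0.0)),
}


def read_calibrated(serial: str, raw: float) -> float:
    rec = registry.get(serial)
    if rec is None:
        # Refuse to guess rather than silently reuse stale coefficients
        # from the previous probe -- the failure mode described above.
        raise KeyError(f"no calibration on file for probe {serial}")
    return rec.apply(raw)
```

The deliberate design choice is the `KeyError`: a measurement with no calibration record should fail loudly, not continue with the old probe's coefficients until an audit catches it.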
The fragmentation described above is not merely an inconvenience for systems integrators. It has measurable, significant economic and safety consequences.
A McKinsey analysis of industrial IoT initiatives found that 40–60% of project costs are consumed by integration work — connecting sensors to platforms, normalising data formats, building custom adapters between incompatible systems [1]. This cost does not create value; it merely compensates for the absence of a common standard. A factory manager who wants to add vibration monitoring to twelve motors must commission custom engineering work for each protocol and historian combination in their facility, even if the physical installation is trivial.
Because most industrial sensing data is siloed in per-system historians — the DCS has its own historian, the SCADA system has another, the building management system has a third — cross-system analysis is either impossible or requires expensive middleware. Predictive maintenance programmes, which depend on correlating mechanical, thermal, electrical, and environmental signals across a machine's complete operating history, are severely compromised when a third of the relevant signals are inaccessible from the analytics platform.
Legacy protocols were designed in an era when industrial networks were physically isolated. HART was designed before the internet existed. Modbus has no authentication. Profibus has no encryption. As industrial networks have become connected to corporate IT networks and to the cloud, these protocols have introduced attack surfaces that were never contemplated in their design. The OT security incident at a Florida water treatment plant in 2021, the Triton/TRISIS malware campaign targeting safety instrumented systems, and dozens of lesser-reported incidents all exploit the gap between OT protocol naivety and IT-level connectivity [2].
The transformation of software development over the past five decades offers a template for what we are proposing for physical sensing. In the 1970s, every significant software system required bespoke hardware support — you could not move a program from one machine to another without porting it, because each machine had a different instruction set, a different operating system, and a different way of talking to peripherals. The idea of writing software once and running it anywhere was not just impractical — it was philosophically alien to how systems were designed.
The emergence of UNIX, the C standard library, and eventually the internet's TCP/IP stack changed this completely. By agreeing on standard interfaces at each layer — the operating system provides POSIX file descriptors; the network provides TCP sockets; the application server provides HTTP requests — developers could write software that was genuinely portable. The payoff was not just convenience: it was an explosion in the rate of software innovation, because every new application could build on the accumulated abstractions below it rather than re-implementing them from scratch.
This is precisely what is missing from physical sensing today. There is no POSIX for sensors. There is no TCP/IP for measurement data. Each new integration project must re-implement the full stack from physical interface to data normalisation to cloud ingest — and then re-implement it again for the next sensor type, the next protocol, the next historian.
A planetary sensing stack is an end-to-end software architecture that provides the same kind of composable, programmable abstraction for physical sensing that the internet stack provides for digital communication. It must solve four problems simultaneously: identity, calibration, interoperability, and security.
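Those four problems converge at the level of a single device descriptor. The following sketch — with invented field names, not taken from the PDP specification — shows how identity, calibration, interoperability, and security might each claim a field on every probe:

```python
from dataclasses import dataclass


@dataclass
class ProbeDescriptor:
    """Hypothetical per-device record unifying the four concerns.

    Field names and shapes are illustrative only.
    """
    # Identity: a globally unique, manufacturer-assigned device ID.
    device_id: str
    # Calibration: per-channel correction coefficients that travel with the device.
    calibration: dict
    # Interoperability: a self-describing channel table (quantity + SI-derived unit).
    channels: list
    # Security: key material for authenticating the device to the gateway.
    public_key_hex: str


descriptor = ProbeDescriptor(
    device_id="urn:example:probe:7f3a9c",
    calibration={"ch0": [0.0, 1.001]},
    channels=[{"id": "ch0", "quantity": "temperature", "unit": "degC"}],
    public_key_hex="ab" * 32,  # placeholder bytes, not a real key
)
```

The structural claim is that none of the four fields is optional: a device missing any one of them reintroduces the corresponding class of integration failure described earlier.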
The temptation for any company building sensing infrastructure is to build a closed ecosystem — to make the protocol proprietary, lock in the hardware, and extract value from the integration moat. We believe this is both strategically wrong and ethically indefensible. It is wrong because the network effects of an open standard dwarf those of a proprietary one: every probe manufacturer that implements the Peripheral Detection Protocol (PDP) makes every PDP gateway more valuable. It is indefensible because the physical sensing problem is global infrastructure, not a product category — it should not be owned by any single company.
PDP is therefore published as an open standard under CC BY 4.0. Any hardware manufacturer may implement it without paying royalties. Any software developer may build on the API without approval. The only thing Continuis Labs asks is attribution and adherence to the conformance test suite.
The sensing stack needs more than a wire protocol. Calibrated measurements flowing from probes to a gateway are raw material — valuable, but not immediately useful without a platform that stores them efficiently, makes them queryable at arbitrary time scales, evaluates alert rules against them in real time, and exposes them to AI and analytics systems through standard APIs. The Vigil platform provides this layer: a time-series database (TimescaleDB) optimised for sensor data at arbitrary sample rates, a real-time alert engine with pluggable notification delivery, and a natural language query interface (Argus) that allows operators to ask questions of their sensing history in plain English.
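Vigil's actual rule engine is not reproduced here, but the core of real-time alert evaluation can be sketched. This is a minimal, assumed form — one rule, one channel, a sustained-exceedance condition — intended only to show the kind of logic the platform layer runs against incoming calibrated samples:

```python
from dataclasses import dataclass


@dataclass
class Rule:
    """A hypothetical alert rule: fire when `channel` exceeds `threshold`
    for `sustained` consecutive samples (debouncing transient spikes)."""
    channel: str
    threshold: float
    sustained: int


def evaluate(rule: Rule, samples: list) -> list:
    """Return the sample indices at which an alert fires."""
    alerts, run = [], 0
    for i, value in enumerate(samples):
        run = run + 1 if value > rule.threshold else 0
        if run == rule.sustained:  # fire once, at the moment the run completes
            alerts.append(i)
    return alerts


rule = Rule(channel="bearing_temp_degC", threshold=80.0, sustained=3)
```

A production engine adds wall-clock windows, hysteresis, and pluggable notification delivery, but the shape — stateful evaluation over an ordered stream of calibrated values — is the same.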
We are writing this paper in April 2026. We believe the next three years represent the most important window in the history of industrial sensing — a convergence moment driven by three simultaneous trends that create, for the first time, both the technical possibility and the economic urgency to establish a universal sensing standard.
The combination of transformer-based AI, edge ML runtimes (TensorFlow Lite, ExecuTorch), and ARM Cortex-M55 processors with dedicated ML acceleration has made it possible to run meaningful inference workloads on microcontrollers that cost under $5. This means that probes themselves can now run anomaly detection models, feature extraction algorithms, and edge aggregation — not just raw data streaming. The implication for sensing architecture is profound: the probe is no longer a dumb transducer but a computational participant in the sensing pipeline. The PDP streaming tier must evolve to accommodate feature vectors, model inference results, and compressed waveform representations alongside raw samples.
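To make "edge feature extraction" concrete: reducing a raw vibration window to a handful of summary statistics is exactly the kind of computation a sub-$5 MCU can now run on-probe. The feature set below is a common illustrative choice, not one prescribed by PDP:

```python
import math


def extract_features(window: list) -> dict:
    """Reduce a raw vibration window to a compact feature vector.

    A probe streaming this dict instead of the raw window cuts its
    bandwidth by orders of magnitude while keeping the signals most
    predictive of bearing and rotor faults.
    """
    n = len(window)
    mean = sum(window) / n
    # RMS of the mean-removed signal: overall vibration energy.
    rms = math.sqrt(sum((x - mean) ** 2 for x in window) / n)
    peak = max(abs(x - mean) for x in window)
    return {
        "rms": rms,
        "peak_to_peak": max(window) - min(window),
        # Crest factor: impulsiveness, an early bearing-fault indicator.
        "crest_factor": peak / rms if rms else 0.0,
    }
```

A real deployment would add frequency-domain features (band energies, peak frequency) via an on-device FFT, which the Cortex-M55's vector extensions accelerate directly.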
The energy transition — solar, wind, battery storage, EV charging infrastructure, smart grids — is creating enormous demand for sensing and monitoring at scales that were previously confined to heavy industry. A solar farm requires thousands of module-level power optimiser readings; a grid-scale battery installation requires cell-level temperature and impedance monitoring on millions of individual cells; a highway EV charging network requires real-time power quality monitoring at thousands of stations. These use cases do not fit neatly into traditional industrial sensing architectures, which were designed for fixed-location, relatively slow-moving processes. They demand sensing infrastructure that is cheap enough to deploy at enormous scale, secure enough to operate on public networks, and flexible enough to handle wildly different measurement types.
Perhaps the most consequential convergence driver is the rapid maturation of humanoid robotics. A modern bipedal robot — whether used in warehousing, manufacturing, healthcare, or field operations — carries between 200 and 400 sensors: joint torque sensors, force/torque wrists, tactile skins, IMUs, cameras, microphones, lidar, proximity sensors, and dozens of temperature and current monitors for thermal management. Each of these sensors currently uses a proprietary protocol defined by the robot manufacturer's chosen component supplier. There is no standard for robot sensing. Each new robot platform requires the same integration effort that has plagued industrial sensing for decades, simply applied to a walking machine rather than a stationary one.
We believe humanoid robotics represents both the greatest near-term opportunity and the greatest long-term leverage point for the planetary sensing stack. Here is why.
A humanoid robot's sensing system is, architecturally, a factory in miniature: dozens of incompatible buses, each serving a different subsystem, with no common data model. A Unitree H1, for example, uses EtherCAT for actuator control, USB for cameras, a proprietary serial bus for its IMU, and GPIO for tactile sensors. A Boston Dynamics Atlas uses different buses from a different set of vendors. Agility Robotics' Digit uses yet others. If you want to write a fault detection algorithm that correlates joint temperature with motor current with foot contact force — a straightforward predictive maintenance query on a walking machine — you must first build the integration layer to unify those three different buses into a common data model. That work is not robotics; it is plumbing.
A PDP-compliant robotic sensing bus would change this fundamentally. Imagine a joint actuator module that carries its own TEDS EEPROM describing its torque sensor, temperature sensor, and position encoder — three channels, one EEPROM, one PDP identity. The robot controller reads the EEPROM on startup, receives calibration coefficients for all three channels, and begins receiving calibrated torque, temperature, and position data on a single standardised bus. When the actuator is replaced after wear, the new actuator's calibration coefficients are automatically applied. When a new robot model uses the same actuator module, no re-integration is required.
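The startup handshake described above might look like the following sketch. The EEPROM layout is invented for readability — a length prefix plus a JSON body — whereas real TEDS templates (in the IEEE 1451 tradition) are bit-packed binary and far richer. The three-channels-one-identity structure is the point being illustrated:

```python
import json
import struct


def build_teds(channels: list) -> bytes:
    """Pack a hypothetical TEDS blob: 2-byte big-endian length + JSON body."""
    body = json.dumps(channels).encode()
    return struct.pack(">H", len(body)) + body


def read_teds(blob: bytes) -> list:
    """What the robot controller does on startup: read the actuator's
    EEPROM, recover per-channel calibration, and apply it immediately --
    no manual re-entry when the actuator is swapped."""
    (length,) = struct.unpack(">H", blob[:2])
    return json.loads(blob[2:2 + length].decode())


# One actuator module, three channels, one identity -- as in the text.
eeprom = build_teds([
    {"quantity": "torque",      "unit": "N.m",  "cal": [0.0, 0.998]},
    {"quantity": "temperature", "unit": "degC", "cal": [0.3, 1.0]},
    {"quantity": "position",    "unit": "rad",  "cal": [0.0, 1.0]},
])
channels = read_teds(eeprom)
```

Because the calibration rides in the module itself, replacing a worn actuator and re-running this read is the entire integration step.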
The platform implications are even more powerful. A fleet of PDP-compliant humanoid robots working across multiple facilities would stream their sensing data into a shared Vigil instance, enabling: fleet-wide predictive maintenance (identify actuator wear patterns across thousands of units before failure); cross-facility operational comparison (identify which sites and workflows produce the most joint stress); and AI training data collection at the sensing layer (calibrated, timestamped, fully attributed sensor data for training locomotion and manipulation models).
PDP is not only a sensing protocol. The PeripheralClass taxonomy includes ACT (actuators), DISPLAY (HMI peripherals), SENSE_ACT (transducers that both sense and actuate, like piezo devices), and BRIDGE (protocol translators). A PDP actuator module uses the same TEDS EEPROM and authentication model as a sensor, but its channel table describes command inputs rather than measurement outputs. This enables bidirectional control loops within the same unified protocol — closing the gap between sensing infrastructure and actuation infrastructure that currently forces robotic systems to maintain two completely separate bus architectures.
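The taxonomy and its consequence for data direction can be sketched as follows. The enum values and the baseline `SENSE` class are assumptions for illustration — the specification's actual wire encoding is not reproduced here:

```python
from enum import Enum


class PeripheralClass(Enum):
    """The classes named in the text; numeric values are placeholders."""
    SENSE = 1       # conventional sensors (assumed baseline class)
    ACT = 2         # actuators: channel table describes command inputs
    DISPLAY = 3     # HMI peripherals
    SENSE_ACT = 4   # transducers that both sense and actuate (e.g. piezo)
    BRIDGE = 5      # protocol translators to/from legacy buses


def channel_directions(cls: PeripheralClass) -> set:
    """Data-flow direction per class, from the device's point of view.

    The key point of the taxonomy: one protocol carries both directions,
    so sensing and actuation need not live on separate bus architectures.
    """
    if cls is PeripheralClass.ACT or cls is PeripheralClass.DISPLAY:
        return {"in"}                 # commands/content flow into the device
    if cls in (PeripheralClass.SENSE_ACT, PeripheralClass.BRIDGE):
        return {"in", "out"}          # bidirectional by construction
    return {"out"}                    # measurements flow out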
The case for a planetary sensing stack is ultimately not about solving today's integration problems — it is about enabling capabilities that are currently impossible.
If every piece of rotating equipment in every factory runs a PDP-compliant vibration sensor, and all of those sensors stream calibrated data into a shared (but appropriately permissioned) time-series platform, it becomes possible to build predictive models that learn from failure patterns across an entire industry rather than a single facility. A bearing failure signature detected in one factory can inform alert rules at every other factory running the same equipment, within hours. The collective intelligence of the sensing network becomes a shared asset for the industry as a whole.
When sensing, actuation, and platform intelligence are unified in a single stack, facilities can begin to adapt themselves. An HVAC system that knows not only the current temperature readings but the predicted occupancy, the external weather forecast, the production schedule, and the energy tariff can optimise its operation continuously without human involvement. A compressed air system that knows the consumption pattern of every connected tool can detect compressor degradation before it causes a pressure drop. These are not futuristic scenarios — they are immediate consequences of having unified, calibrated, continuously streaming sensing data.
The furthest horizon of the planetary sensing stack is the built environment itself becoming programmable. Today, a building is instrumented with dozens of disconnected systems — BMS for HVAC, access control for doors, fire detection, power metering, occupancy sensing — each generating data that cannot speak to the others. A sensing stack that unifies these into a common data model, with common identity and common APIs, transforms the building from a collection of systems into a single programmable entity.
The same transformation applies at city scale: street lighting, water distribution, traffic management, environmental monitoring, and emergency response systems all generate sensing data that, if unified, would enable a level of urban intelligence that is currently impossible. This is not science fiction — it is an engineering problem, and the engineering problem is a protocol problem. Solve the protocol problem, and the urban intelligence follows.
The planetary sensing stack will not be built in a single engineering effort by a single company. It will be built, like the internet, through the accumulation of open standards, reference implementations, and the network effects that follow from adoption. Continuis Labs is not trying to own the physical sensing stack — we are trying to start it.
We have published the Peripheral Detection Protocol (CLR-001) as an open standard, with reference implementations in Python and C. We have built the Vigil platform as a reference implementation of the platform layer, demonstrating that the full stack — from probe EEPROM to AI query interface — can be built on open-source components. We are building the MNEMOS Node hardware platform as a reference implementation of the gateway and probe hardware, enabling sensor manufacturers to adopt PDP without designing from scratch.
We need hardware manufacturers to implement PDP in new sensor products and to contribute TEDS descriptors for their existing product lines. We need systems integrators to build PDP bridges for legacy protocols — HART, Modbus, IO-Link, EtherCAT — so that existing instrumented facilities can join the stack without wholesale replacement of their sensors. We need robotics companies to evaluate PDP as a candidate for their internal sensor bus architecture. And we need standards bodies — IEC, IEEE, ISA — to review PDP for potential incorporation into their own standards families.
We expect PDP to reach stable release (v1.0) in Q4 2026, following a 90-day public comment period on this and the accompanying technical specification. The Vigil platform will reach general availability in Q1 2027. The MNEMOS Node hardware will begin production samples for early access partners in Q3 2026.
The planetary sensing stack is a decades-long project. But every decades-long project starts with a protocol decision — and protocol decisions, once made, are very hard to unmake. We are asking the sensing industry to make the right decision now, while the field is still open. The cost of acting is low; the cost of continued fragmentation is measured in trillions of dollars and billions of avoidable equipment failures.