A data-centric OS will replace apps.

If code for UIs can be generated on the fly, apps become irrelevant. The DataOS is the last app you will ever need.

The arrival of AI that is capable of generating user interfaces in real time will bring the 'application'-dominated era of computing to an end. The next iteration of operating systems will resemble a cross between a digital personal assistant, akin to Siri or Alexa, and a web browser, accessed through freeform user interfaces that are generated on demand to display content relevant to the user. I imagine that this form of operating system will take input from the user (likely verbally) and run a search against an open data lake. The AI would then generate an appropriate user interface in real time to show the resulting data to the user in the manner that they prefer.

For example, instead of having separate YouTube and TikTok apps, the user would simply ask 'show me some [short and] funny videos'. This would trigger a search of the open data lake, returning a mix of new and previously enjoyed matches. The AI would then assess the types of data it has retrieved, along with its memory of the user's preferences, and render an appropriate UI for the dataset. The user might let the AI know, verbally or through their actions, what they liked or didn't like about the experience, leading to improvements next time.
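
As a rough sketch of this loop (all of the names here, from searchDataLake to generateUI, are hypothetical placeholders rather than a real API), the core of such a system might look something like this in TypeScript:

// Hypothetical shape of an item in the open data lake; not a real schema.
interface DataItem {
    txid: string;                        // content identifier in the data lake
    tags: Record<string, string>;        // e.g. { Topic: "cats", "Content-Type": "video/mp4" }
}

interface UserProfile {
    preferences: string[];               // the AI's memory of the user's tastes
}

// Placeholder stubs standing in for the data-lake search and the UI-generating AI.
async function searchDataLake(request: string): Promise<DataItem[]> {
    return [];                           // in reality: a query against the open data layer
}

async function generateUI(items: DataItem[], user: UserProfile): Promise<string> {
    return "<div>...</div>";             // in reality: AI-generated component markup
}

// One pass through the loop: request -> search -> generate UI -> render -> learn.
async function handleRequest(request: string, user: UserProfile): Promise<void> {
    const items = await searchDataLake(request);   // 1. find matching content
    const ui = await generateUI(items, user);      // 2. build a UI suited to the data and the user
    document.body.innerHTML = ui;                  // 3. show it to the user
    // 4. verbal or behavioural feedback would then be folded back into user.preferences
}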

Because user interfaces become arbitrary and can be generated in real time, this class of operating systems will be fundamentally 'data first'. User interfaces can change from day to day in response to the user's whims, and the types of data that people upload to the open dataset could also change organically.

Why does this matter?

There are three fundamental benefits to this system that I currently foresee. I list them from the most trivial to the 'deepest' in the nature of the change:

  • Display Flexibility: The proposed OS would allow ultimate flexibility and customization, with zero effort from the user.
  • Radically Flexible Features: By decoupling data from any specific app and giving the user a way to request any new feature that they want, on the fly, we allow for entirely free-form and flexible experiences. These experiences will likely morph extremely frequently, and in a lightweight fashion, as the crowd experiments with different versions of the core concepts. By sharing an open data layer with compatible tagging (see the sketch after this list), these experiences will exist seamlessly next to one another.
  • Unlocking The User From The Attention Economy: The present browser and OS model is to let users decide which applications they want to open, and to render the instructions sent from the service for them. This creates extremely strong incentives for developers to build UIs and experiences that maximize the amount of time that users spend inside the app on their machine, typically because time in the app correlates linearly with the number of ads the user sees, and thus with the revenue that the company offering the service makes. In the case of the DataOS this would not happen: the platform (whether OS or browser) would act on behalf of the user to optimize the experience for them. It would, in the language of the HTTP specification, fulfill the role of the User-Agent: a machine that traverses data on the web, working for the maximal benefit of the user. We should be able to pair this with the atomic asset ownership and incentive systems that we have been developing in order to maintain, and even strengthen, the incentives for creators to create and contribute content.
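
To give a rough sense of what 'compatible tagging' on the open data layer could mean in practice, here are two illustrative entries that different generated experiences could both discover by querying on their tags. The tag names are hypothetical, not a proposed standard:

// Illustrative only: two items in the shared data layer, discoverable by any
// generated experience that queries on these tags.
const sharedDataLayer = [
    { txid: "[TXID]", tags: { "Content-Type": "video/mp4", "Topic": "cats", "Duration-Seconds": "42" } },
    { txid: "[TXID]", tags: { "Content-Type": "text/markdown", "Topic": "cats" } },
];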

Present-day feasibility

The key enabling technology for this appears to have reached MVP-level capabilities this last week. ChatGPT is now able to generate code fragments that fulfill a remarkably deep set of tasks. In the case of the DataOS, we would really only need the AI to build simple (React?) components in response to the data that is relevant to the user's query. ChatGPT should already be able to achieve this, and we will be able to swap out the 'engine' for smarter (and hopefully more open) AIs in the future.
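
As an illustration, a generated fragment for the 'short, funny cat videos' request need not be much more complicated than the React component below (written here in TypeScript). The prop shape is an assumption; in practice the AI would shape it to match whatever data the query returned:

import React from "react";

// Hypothetical props: the DataOS shell would pass in the data returned by the query.
interface Video {
    title: string;
    videoSrc: string;   // resolved from the item's TXID
}

// A minimal AI-generated component: a vertically scrolling feed of short videos.
export function VideoFeed({ videos }: { videos: Video[] }) {
    return (
        <div style={{ display: "flex", flexDirection: "column", gap: "1rem" }}>
            {videos.map((v) => (
                <figure key={v.videoSrc}>
                    <video src={v.videoSrc} controls width={480} />
                    <figcaption>{v.title}</figcaption>
                </figure>
            ))}
        </div>
    );
}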

In terms of architecture, the AI will need to be fine-tuned to take requests from the user and to address its output to other elements of the system, which could in turn respond to the AI, allowing it to move on to fulfilling other parts of the request.

For example, a conversation between the AI and the DataOS could look as follows:

Prompt> User request: Please show me some short, funny videos about cats.

Response> GraphQL request: query {
    ...
}

Prompt> GraphQL response: [List of relevant TXIDs, with their tags]

Response> Render UI:

<div>
    ...
</div>

Data:

[
    {
        title: "...",
        video_src: "[TXID]"
    }
]
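
To make the elided query a little more concrete: assuming the open data lake exposes an Arweave-style GraphQL gateway, the shell might execute the AI's request along the following lines. The 'Topic' tag and the example gateway URL are assumptions rather than part of any fixed schema:

// Sketch of the DataOS shell executing the AI's query against a GraphQL gateway.
const FUNNY_CAT_VIDEOS_QUERY = `
  query {
    transactions(
      first: 10
      tags: [
        { name: "Content-Type", values: ["video/mp4"] },
        { name: "Topic", values: ["cats"] }
      ]
    ) {
      edges { node { id tags { name value } } }
    }
  }
`;

async function runQuery(gatewayUrl: string): Promise<unknown> {
    const res = await fetch(gatewayUrl, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ query: FUNNY_CAT_VIDEOS_QUERY }),
    });
    return res.json();   // the list of relevant TXIDs, with their tags
}

// e.g. runQuery("https://arweave.net/graphql")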

The DataOS system would then take this formatted output from the AI and use it to render the components (as iframes, etc.) in the UI. The AI could also be fine-tuned to add stamping and tipping functions to the components as necessary, in order to ensure that rewards flow back to the creators of the content that is displayed.
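
A minimal sketch of that last step, assuming the AI returns its markup and data as separate fields: the shell drops the generated UI into a sandboxed iframe, and exposes a hypothetical tip() helper that generated components could call to route value back to creators. None of these names come from an existing API:

// Hypothetical structure of the AI's formatted output (mirroring the conversation above).
interface AIOutput {
    markup: string;                                  // the "Render UI" block
    data: { title: string; video_src: string }[];    // the "Data" block
}

// Render the generated UI inside a sandboxed iframe, so that arbitrary markup
// cannot reach into the rest of the shell.
function renderOutput(output: AIOutput): void {
    const frame = document.createElement("iframe");
    frame.sandbox.add("allow-scripts");              // scripts run, but isolated from the parent page
    frame.srcdoc = output.markup;                    // data could be inlined here or sent via postMessage
    document.body.appendChild(frame);
}

// Stub for the stamping/tipping hook the shell could expose to generated
// components, so that rewards flow back to the creators of displayed content.
function tip(txid: string, amount: number): void {
    console.log(`Tipping ${amount} to the creator of ${txid}`);   // in reality: sign and submit a transaction
}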

While it seems likely that this system will eventually replace the need for traditional operating systems, in the same way that browsers have encroached on that space (see ChromeOS, etc.), the 'DataOS' could first be built as an app that the user installs on their devices. This radically simplifies the go-to-market approach for the project without curtailing its long-term trajectory.