AI here, there, and everywhere
Atomic programming harnesses AI at every level. Atom54 leverages AI directly, and it also facilitates the use of AI by chaining agents and orchestrating seamless collaboration between AI and traditional software.
Before delving into the details, please review our previous article about the core concepts of Atomic software and the composition of Atomic components, if you haven't already.
Atom54 software is modular and composable. This architecture not only allows but actively embraces the use of AI at every layer of the software:
Atoms use AI
An individual Atom can use AI to perform standard operations such as regression, clustering, and classification.
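For illustration, here is a minimal sketch of a classification Atom. The Atom shape, its declared inputs/outputs, and the run method are assumptions for this sketch, not Atom54's actual API; the model is a plain scikit-learn classifier.

```python
# Minimal sketch; the Atom shape is hypothetical and Atom54's actual API may differ.
from sklearn.linear_model import LogisticRegression

class ClassifierAtom:
    """An Atom that wraps a small ML model behind declared ports."""
    inputs = {"features": "vector"}    # hypothetical port declaration
    outputs = {"label": "category"}

    def __init__(self):
        self.model = LogisticRegression()

    def train(self, X, y):
        # Fit the underlying classifier on labelled examples.
        self.model.fit(X, y)

    def run(self, features):
        # Map the declared input port to the declared output port.
        return {"label": self.model.predict([features])[0]}

atom = ClassifierAtom()
atom.train([[0, 0], [1, 1]], ["low", "high"])
print(atom.run([1, 0]))  # -> {'label': ...}
```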
Atoms co-authored with AI
AI can be used to co-author the code of an Atom. Atoms are usually small, well-scoped units of code, which makes them well suited to generation by an LLM.
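As a sketch of what co-authoring might look like, the snippet below asks an LLM to draft an Atom from a short specification, using the OpenAI Python client. The prompt format, the model name, and the TemperatureAtom contract are illustrative assumptions, and any generated code would still need human review.

```python
# Hypothetical sketch: prompting an LLM to draft an Atom's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

spec = (
    "Write a Python class TemperatureAtom with one input port 'celsius' "
    "and one output port 'fahrenheit'. Its run(celsius) method should "
    "return {'fahrenheit': celsius * 9 / 5 + 32}."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # example model name
    messages=[{"role": "user", "content": spec}],
)

# The draft still needs review before it joins a composition.
print(response.choices[0].message.content)
```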
AI-powered reality interfaces
Atoms also leverage AI to interact with reality interfaces such as vision, sound, speech, touch, and motion.
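As one hedged sketch, a speech Atom could wrap an off-the-shelf transcription model. The Atom shape is again an assumption; the example uses the open-source Whisper library (pip install openai-whisper).

```python
# Hypothetical sketch: an Atom exposing a speech reality interface.
import whisper  # open-source speech-to-text model

class SpeechAtom:
    """Turns raw audio into text that downstream Atoms can consume."""
    inputs = {"audio_path": "file"}    # hypothetical port declaration
    outputs = {"transcript": "text"}

    def __init__(self):
        self.model = whisper.load_model("base")

    def run(self, audio_path):
        result = self.model.transcribe(audio_path)
        return {"transcript": result["text"]}
```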
Among the areas of technology with huge potential and undelivered promise are the wearable, AR, VR, and IoT spaces. All of them involve multi-device scenarios and much closer interaction with physical reality than the classical use cases addressed by software. These areas have not attracted enough developer-community engagement simply because it is hard to develop for these devices. This is where Atom54 can make a big difference: not only does it facilitate multi-device experiences, it does so in a general-purpose, device-agnostic way.
AI-driven composition
And last but not least, composition is always semantic: it is defined by connecting the inputs and outputs of components.
LLMs are therefore well positioned to reason about compositions and create novel experiences.
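To make the idea concrete, here is a minimal sketch of semantic composition over declared ports. The port-matching rule and Atom shapes are assumptions for illustration, not Atom54's actual composition engine; the point is that named ports give an LLM a machine-readable catalogue to reason over.

```python
# Hypothetical sketch: composing Atoms by matching output ports to input ports.

def compose(producer, consumer):
    """Connect two Atoms if the producer's outputs cover the consumer's inputs."""
    missing = set(consumer.inputs) - set(producer.outputs)
    if missing:
        raise ValueError(f"Unconnected inputs: {missing}")

    def pipeline(**kwargs):
        intermediate = producer.run(**kwargs)   # e.g. {"transcript": "..."}
        # Pass along only the ports the consumer declares.
        wanted = {k: v for k, v in intermediate.items() if k in consumer.inputs}
        return consumer.run(**wanted)

    return pipeline

# e.g. pipeline = compose(SpeechAtom(), SentimentAtom())  # SentimentAtom is hypothetical
```

Because every connection is expressed in terms of named ports, an LLM handed the catalogue of available Atoms can propose which connections form a valid, useful pipeline.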
To learn more about Atom54, please visit our website or repository, read our blog, or get in touch.