
WHAT MAKES SOFTWARE “INTELLIGENT”?

  • Bruce Mullan
  • Mar 18
  • 2 min read

Ever wondered what makes a system intelligent? How would you know if your system uses AI? For years, vendors leaned on the term “smart” in their marketing. In 2026, the distinction between smart, intelligent, and everything else is critically important.


Traditional systems provide a fixed set of features and functions. AI systems evolve. That evolution enables machines to perceive their environment and take actions that maximise their chances of achieving a defined business outcome. Automated invoice processing is one example: a machine watches a human task, learns from it, and eventually replicates it.


Invoice automation is also a good example of how AI has filtered into general applications over the last 15 years, often without ever being called AI!


Let’s go back in time to see how we got here.


Academics formally began researching AI back in 1958. The first big breakthrough came over 50 years later, in 2012, when deep learning systems trained on graphics processing units (GPUs) began to outperform traditional AI approaches. The 2020s brought the AI boom, with generative AI tools such as ChatGPT and Claude. Unfortunately, generative AI’s ability to create and modify content has led to several unintended consequences and harms.


This new AI capability has fuelled regulatory policies intended to ensure the technology is safe and beneficial. It is not unlike Work Health and Safety, where government regulation was strengthened after major safety incidents and deaths in the construction industry.


The Digital Transformation Agency’s (DTA’s) definition of an AI system:


An Artificial Intelligence (AI) system is a machine-based system that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments. Different AI systems vary in their levels of autonomy and adaptiveness after deployment.


The implication is that this definition determines whether or not the new technical standards apply to any given IT system.


AI systems can be pre-built, commercially available solutions, or they can be internally developed. In either case, the technical standards apply.


So the next important question is: who is responsible when an AI system fails? The customer, the vendor, or both?


We’ll cover that topic in a future post about contracting with AI vendors.


For now, you may already be using an AI system, or you may be exploring opportunities with new AI technologies.


Either way, organisations must prepare for heightened responsibility to meet the technical standards across a system’s lifecycle.


And the second critical question remains: who’s responsible if something goes wrong?


Contact us to learn more about how we help implement safe and responsible AI governance practices: email info@aigovernancepartners.com.au or call 1300 69 70 40.



