How An AI Ethics Assessment Could Have Prevented The Robodebt Disaster

July 7, 2023

Introducing the Robodebt Disaster – A Brief Overview

Robodebt was an automated debt recovery scheme implemented by the Australian government. It was designed to identify and recover overpaid welfare benefits, primarily targeting Centrelink recipients. The system matched income data from the Australian Taxation Office against the income recipients had reported to Centrelink, and where discrepancies were found, debt notices were automatically generated and sent to individuals, often without proper human intervention or review. The Scheme demanded that people repay the government for overpayments they had received in the past. In total, 794,000 debts were raised against 526,000 people(1), on average around 2 out of every 100 people in Australia.
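To make the central flaw concrete, the sketch below, a hypothetical simplification rather than the Scheme's actual code, shows how averaging an annual tax office income figure across 26 fortnights manufactures an apparent discrepancy for anyone whose income was uneven across the year. All names and figures are illustrative.

```python
# Illustrative sketch of "income averaging"; not the Scheme's actual code.
FORTNIGHTS_PER_YEAR = 26

def apparent_discrepancy(ato_annual_income: float,
                         reported_fortnightly: list[float]) -> float:
    """Spread the ATO annual figure evenly over 26 fortnights, then flag a
    discrepancy wherever the average exceeds what the person reported.
    (The Scheme derived debts from discrepancies like these; the real
    entitlement calculation had more steps, omitted here.)"""
    average = ato_annual_income / FORTNIGHTS_PER_YEAR
    return sum(max(average - reported, 0.0) for reported in reported_fortnightly)

# Someone who earned $26,000 in six months of work, then reported $0 income
# while on welfare for the rest of the year:
reported = [2000.0] * 13 + [0.0] * 13
print(apparent_discrepancy(26_000.0, reported))  # 13000.0 of phantom "discrepancy"
```

Note that the person in this example reported exactly $26,000 across the year, the same total the tax office held, yet the averaging alone manufactures a $13,000 discrepancy.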

The Robodebt system faced significant controversy and criticism for its flawed methodology, which resulted in incorrect debt calculations and unfair debt collection practices. Over time, more and more recipients of debt notices raised their concerns, both through the agency itself and through administrative appeals processes. People were severely distressed by the claims being made against them, and some people sadly took their own lives.

The system first started raising debts in 2015/16. In 2020, the Australian government announced the discontinuation of the Robodebt program, acknowledging the flaws and the negative impact it had on vulnerable recipients. Of the debts raised, $746 million will be repaid to people who made payments, and an estimated $1.751 billion will be written off(1).

Although the Scheme was intended to generate $4.722 billion in savings, after costs it generated savings of only $406 million(1). It is important to note that this $406 million will likely be completely wiped out by the wider costs to families and society in general, the loss of trust in government, and the cost of implementing the 57 recommendations made in the Royal Commission into the Robodebt Scheme Report.

The Need for an AI Ethics Assessment

Given the Robodebt system’s disastrous implications, it is clear that a rigorous ethical assessment of any Artificial Intelligence (AI) decision-making process should be mandatory before and during implementation.

Ethics is about deliberation on how we decide what is right and wrong. AI ethics is the application of moral principles to decisions made by machines - in other words, it's the system and supporting processes for ensuring that automated decision-making systems do the “right” thing.

An AI Ethics Assessment Process(6) involves examining a range of ethical considerations, such as the impact on humans and their value systems, and addressing ethical requirements established under four key areas, transparency, accountability, algorithmic bias, and privacy, before an AI system is implemented.

Ethical Concerns

The following key ethical concerns were found by reviewing the detailed findings set out by the Royal Commission into the Robodebt Scheme(1).

  • Biased automated system processes drove poor human outcomes
  • Staff were ill-equipped to deal with the concerns raised by distressed, impacted citizens
  • The privacy and context of the data associated with humans was not well managed over time
  • Concerns and issues raised were not taken into consideration
  • Accountability was lacking as stakeholders and processes were not well documented or understood
  • Communication and clarity of process was insensitive to human needs

The following assessment of these ethical concerns, and of how an AI ethics assessment would have helped prevent the issues from occurring, is representative of what the outcome would be; it is not a complete AI ethics assessment, which is much more involved and covers many more requirements.

Biased automated system processes drove poor human outcomes

  • The automation used in the Scheme, which removed the human element, was “a key factor in the harm that it did”(3)
  • The system was not adequate to identify everyone who might fall within a vulnerability category and need help. In some cases, people with vulnerabilities were missed because of particular system settings. For example, if you were the recipient of a sickness allowance, that payment was flagged as a “non-activity payment”, and if you had that flag the vulnerability exclusion criteria were not applied to you, so you were caught up in the Scheme even though being sick is a fairly clear indicator of likely vulnerability (a minimal sketch of this rule appears below).
  • Letters were sent to the address the agency had on file, which, for anyone who had not received welfare payments for some years, could be an old address, so some people did not know about the debt until a debt collection agency called them
Figure (not reproduced here): the steps that were automated under the Scheme(1); these steps are set out in the process overview at the end of this article.

During an AI ethics assessment, algorithmic bias is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. The organisation develops a clearly explainable model for how the system will work, together with an ethical bias profile.
  2. The types of legally protected or sensitive characteristics used by the system are clearly identified.
  3. An intervention plan is in place for when automated system behaviour becomes unacceptably biased, including specified intervention triggers and a protocol for initiating corrective intervention (a minimal sketch of such a trigger follows this list). This would have ensured that vulnerability flags were assessed and adjusted regularly, and that the decision model was remediated in light of the feedback received from impacted people.
  4. Data is ingested into the system only with careful consideration, risk assessment and ongoing review.
  5. Sufficient skilled resources are available to address unacceptable bias within a time frame appropriate to the severity of the impact.
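As a minimal sketch of point 3, the following shows what a specified intervention trigger and protocol might look like. The metric (rate of debts overturned on review) and the 10% threshold are illustrative assumptions, not figures from any real protocol.

```python
# A hypothetical intervention trigger; the metric and 10% threshold are
# illustrative assumptions, not figures from any real protocol.

OVERTURN_RATE_TRIGGER = 0.10  # pause if >10% of reviewed debts are overturned

def pause_automated_debt_raising() -> None:
    print("1. automation paused pending review")

def escalate_to_oversight_body() -> None:
    print("2. escalated to the accountable oversight body")

def schedule_model_remediation() -> None:
    print("3. remediation of the decision model scheduled")

def review_cycle(debts_reviewed: int, debts_overturned: int) -> None:
    """Run on a regular cadence; the steps below are the agreed protocol."""
    if debts_reviewed and debts_overturned / debts_reviewed > OVERTURN_RATE_TRIGGER:
        pause_automated_debt_raising()
        escalate_to_oversight_body()
        schedule_model_remediation()

# Roughly the AAT outcomes cited later in this article(5): 79% of 554 decisions
review_cycle(debts_reviewed=554, debts_overturned=438)
```

Fed with the Administrative Appeals Tribunal outcomes discussed later in this article, such a trigger would have fired overwhelmingly; the Scheme had no equivalent circuit-breaker.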

Staff were ill-equipped to deal with the concerns raised by distressed, impacted citizens

  • The technical training given to staff in relation to the Scheme was inadequate: a total of 2.5 days of training was provided, in which they were presented with complex topics covering legal matters, technical constructs, compliance and cost calculations.
  • Staff were not adequately trained to deal with vulnerable, distressed and at risk people.
  • Staff reported that their workload increased as the volume of system outcomes increased.

During an AI ethics assessment, ethical accountability is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. Staff fully understand the processes and the automated parts of the system and have a process for triggering reviews.
  2. A process is in place to halt activity and assess the context before proceeding when significant distress is being created.
  3. Processes are in place for effective human oversight that mitigate against harmful or detrimental human intervention.

The privacy and context of the data associated with humans was not well managed over time

  • Two government agencies matched data to identify people. Only high-confidence matches were meant to be passed from the Australian Taxation Office (ATO) to the Department of Human Services (DHS), but medium-confidence matches were also being passed until 2019 (a minimal sketch of the intended filter appears below).
  • A data-matching protocol established in 2004 to ensure unmatched data was destroyed was not adhered to: DHS used historical data, and data that should have been destroyed under this protocol.
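A minimal sketch of the confidence filter that should have been enforced at the agency boundary follows; the identifiers and labels are illustrative.

```python
# A minimal sketch of the confidence filter that should have been enforced
# at the agency boundary; identifiers and labels are illustrative.
from enum import Enum

class MatchConfidence(Enum):
    HIGH = "high"
    MEDIUM = "medium"
    LOW = "low"

def matches_to_pass(matches: list[tuple[str, MatchConfidence]]) -> list[str]:
    """Only high-confidence identity matches may be passed from ATO to DHS."""
    return [person_id for person_id, conf in matches if conf is MatchConfidence.HIGH]

matches = [("A123", MatchConfidence.HIGH), ("B456", MatchConfidence.MEDIUM)]
print(matches_to_pass(matches))  # ['A123'] -- the medium match is never passed
```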

During an AI ethics assessment, ethical privacy is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. Data points and their context are assessed, with particular focus on identifying alternatives to using the data and ensuring that all data used, and its purpose, is clearly defined within the context of the automated system.
  2. An ethical profile of the system is maintained with respect to accountability, privacy and transparency requirements, criteria and behaviours.

Concerns and issues raised were not taken into consideration

  • Advocacy groups tried to raise the matter and were ignored.
  • Concerns raised by staff were ignored.
  • The Administrative Appeals Tribunal made 554 decisions about cases brought to it between 2016 and 2022, and in 79% of these cases the Tribunal was not satisfied that the department’s calculations were accurate, because income averaging had been used to calculate all or part of the debt. DHS did not appeal, so we can only assume it accepted these decisions, yet no changes were made following these repeated outcomes(5).

During an AI ethics assessment, ethical accountability is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. Ethics oversight roles are established, and the ethical dimension of every role is addressed and implemented.
  2. Random and systematic errors are addressed effectively. Automated systems will always produce random errors that can create risks and negative impacts, so a structure and process must be in place to manage issues and mitigate first- and second-order impacts on stakeholders and citizens.
  3. Human-related systematic errors, whether of omission or commission, are addressed across the system’s life cycle.

Ethical accountability was poor as stakeholders and processes were not well documented or understood

  • Haphazard and inconsistent documentation of the automated decision processes meant that even the Royal Commission could not rely wholly on the technical documents provided(2)
  • A number of reports from both consulting and legal firms were never delivered or completed(1).

During an AI ethics assessment, ethical transparency is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. A system design overview is open, accessible, well documented and takes user needs into account. A précis of the design is made available to the public.
  2. The organisation is able to uphold transparency confidence by ensuring design and features are clear to users and stakeholders.
  3. Roles are refined to ensure actions taken are defined and outcomes reviewed.

Communication was insensitive to human needs

  • The way debt collectors attempted to convince people to pay their debts was to tell them of the consequences that might arise if they did not pay.
  • Operators read from call scripts warning that the consequences of non-payment could be that “the Department of Human Services may garnishee your wages, tax refund or other assets and income (including bank account) or refer this matter to their solicitors for Legal Action. They may also issue a Departure Prohibition Order, which will prevent you from travelling overseas. An interest charge may also be applied to your debt if you do not repay the amount in full or make an acceptable payment arrangement.”
  • Debt collectors could contact people in writing, by phone or by electronic communication (e.g. SMS), and were generally permitted up to two contacts with a debtor each week. This meant that a person could potentially receive up to 48 contacts or attempted contacts (two per week over roughly 24 weeks) from a debt collector over a six-month period.
  • In some cases, being contacted by a debt collector was the first time a person found out about the debt: because they had not received welfare payments for some years, the original letter had been sent to an old address.

Figure (not reproduced here): example of the written correspondence sent to people(1).

During an AI ethics assessment, ethical privacy is assessed in detail: the assessor works with stakeholders to develop requirements and build an evidence base proving those requirements are met, ensuring that:

  1. The use of technology does not overstep the bounds of dignity or appropriateness, either by overfitting to certain characteristics or by drawing unreasonable inferences from isolated data points.
  2. Feedback loops are created so that impacted people who raise concerns over processes are heard and their concerns are adequately addressed.

Overview of the Robodebt process

  1. One government agency (in this case the Department of Human Services, DHS) sent another agency (in this case the tax office) a list of people receiving welfare payments. The tax office would match people on basic personal information, such as name and date of birth, and send back the matched people with their income data.
  2. The system automatically moved people from one stage to the next based on a set of system rules.
  3. The system would decide if you were to be subject to a “compliance review” based on what the data told the system about you.
  4. Data that would exclude you from a "compliance review" included being deceased, being legally blind, or not having received a minimum level of payment.
  5. There were other categories of people who were indefinitely excluded like people in prison and victims of domestic violence, and some were temporarily excluded such as people who were bereaved or in a disaster zone. Other filters were applied in 2017.
  6. People who had particular identified vulnerabilities were subject to staff-assisted compliance reviews.
  7. If you didn't fall into these categories, the system would then apply a compliance risk rating calculated from the discrepancy between the employment income the welfare agency knew about and the income reported by the tax office.
  8. If there was a discrepancy you would receive a letter sent to the address the agency has on file.
  9. To respond to the letter you could contact the agency to discuss the debt and make a payment plan, or inform the agency of a new income figure via its online service; if that figure was within 1% (later increased to 5%) of what the system expected, you would be moved to the next stage, otherwise additional risk ratings would be applied.
  10. Further system-based risk rules and calculations then determined whether a debt should be raised, and people were sent automated letters requesting payment of the debt.
  11. Debts were automatically referred to debt management agencies if they weren’t responded to within 42 days or welfare payments were withheld if the person was still a recipient of welfare payments.
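The sketch below restates these steps as a condensed, hypothetical decision pipeline. The field names, exclusion list and thresholds are simplified from the description above and are not the Scheme's actual implementation.

```python
# A condensed, hypothetical restatement of the routing in steps 3-11 above;
# fields, exclusions and thresholds are simplified for illustration.
from dataclasses import dataclass

@dataclass
class Person:
    deceased: bool = False
    legally_blind: bool = False
    identified_vulnerability: bool = False
    ato_fortnightly_average: float = 0.0  # annual ATO income spread over 26 fortnights
    reported_fortnightly: float = 0.0     # employment income reported to the agency

TOLERANCE = 0.01  # initially 1% of the expected figure, later increased to 5%

def next_step(p: Person) -> str:
    """Route a person through the automated stages described above."""
    if p.deceased or p.legally_blind:
        return "excluded from compliance review"              # step 4
    if p.identified_vulnerability:
        return "staff-assisted compliance review"             # step 6
    gap = abs(p.ato_fortnightly_average - p.reported_fortnightly)
    if gap <= TOLERANCE * max(p.ato_fortnightly_average, 1.0):
        return "no discrepancy: no further action"            # step 9
    return "discrepancy letter, then automated debt raising"  # steps 8-11

print(next_step(Person(ato_fortnightly_average=1000.0, reported_fortnightly=0.0)))
# -> discrepancy letter, then automated debt raising
```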

References:

1 - Royal Commission into the Robodebt Scheme Report

2 - Royal Commission into the Robodebt Scheme Report, page 471

3 - Royal Commission into the Robodebt Scheme Report, page 488

4 - Royal Commission into the Robodebt Scheme Report, page 506

5 - Royal Commission into the Robodebt Scheme Report, pages 556-557

6 - For the purposes of this article we are referring to an AI ethics assessment conducted in accordance with the IEEE AI ethics assessment process established under the CertifAIEd program.

Fiona Long, 7th July 2023
