By Katarina Lind and Leo Wallentin of Journalism++. Part of this story is available in Swedish at Dagens Samhälle under the title Trelleborgs biståndsrobot spred känsliga uppgifter.
Trelleborg is Sweden’s front-runner in automating welfare distribution. An analysis of the system’s source code brought little transparency – but revealed that the personal data of hundreds was wrongly made public.
Trelleborg is a city of 40,000 in Sweden’s far south. Three years ago, it became the first municipality to introduce fully automated decision-making in its social services. The municipality named its robot Ernst and introduced it as a digital co-worker.
Sweden’s social services are governed by local authorities. The 1992 Local Government Act gave decision-making powers to municipal committees, but this right can be delegated to an employee. With the exception of Trelleborg and its lawyers, every other authority that has assessed the question holds that decision-making cannot be delegated to an automated system, and that automated decision-making is therefore not compatible with the law.
The same does not apply to state agencies. In 2018, automated decisions were allowed after a change in the Administrative Procedure Act (Förvaltningslagen), the law that regulates governmental agencies. Welfare payments such as parental benefits and dental care subsidies are now allocated without any human intervention.
Trelleborg uses a process known as robotic process automation, or RPA, to handle applications for financial aid. The software applies a set of rules that lead to a yes or no decision.
The first time Trelleborg residents apply for financial aid, they meet a caseworker in person. After that, they must reapply every month, and if they apply online, the decision will be made by a machine. They fill in details on their income and expenses, which the RPA compares with the previous month. For verification purposes, it also pulls information such as tax and income statements and student loans from a database that gathers personal data from seven agencies. A decision is then made based on these data points.
Should the applicant’s situation significantly change from one month to the next, the software stops and forwards the application to a human caseworker. Around one in three reapplications is currently processed by the software; the rest go to caseworkers because of circumstances the software cannot handle.
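The process described above can be sketched in a few lines of rule-based code. This is purely illustrative: Trelleborg has not published its rules in readable form, and every field name, threshold, and criterion below is a hypothetical stand-in.

```python
# Illustrative sketch only: the actual Trelleborg rules are not public in
# readable form. All field names and thresholds here are hypothetical.

def decide_reapplication(current: dict, previous: dict,
                         change_threshold: float = 0.2) -> str:
    """Return 'approve', 'deny', or 'refer' for a monthly reapplication.

    current/previous hold the applicant's reported figures for this month
    and the last, e.g. {"income": 11000, "expenses": 9500} (amounts in SEK).
    """
    # A significant month-to-month change stops the automation and
    # forwards the case to a human caseworker.
    for field in ("income", "expenses"):
        old, new = previous[field], current[field]
        if old and abs(new - old) / old > change_threshold:
            return "refer"

    # Otherwise apply a simple yes/no rule: aid is granted when expenses
    # exceed income (a stand-in for the real, far more numerous criteria).
    return "approve" if current["expenses"] > current["income"] else "deny"
```

The point of the sketch is the shape of the system, not its content: a cascade of fixed comparisons with a human fallback, rather than anything resembling machine learning.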
Because every beneficiary meets a caseworker the first time they apply, and because changed circumstances are checked by a human being, an individual assessment is always made, Ms Schlyter said.
The main reason for deploying RPA was to save time and relocate resources to meet people instead of handling documents, according to Ms Schlyter. It also shortens the time for beneficiaries to obtain a decision, as decisions that previously could have taken two days can now be reached in less than a minute.
The introduction of the RPA and the reallocation of staff also lowered the municipality’s welfare payments, she said.
During the last few years, many towns started using online applications for welfare distribution, a first step towards automating the process. A report by the Board of Health and Welfare (Socialstyrelsen), a national authority, showed that the number of municipalities that introduced online applications for welfare more than tripled over the last three years, from 9 percent in 2017 to 29 percent in 2019.
Another report, published in November 2019 by the Swedish Association of Local Authorities and Regions (SKR), showed that the trend continued upwards, with 36 percent of municipalities saying that they used online application systems.
However, few municipalities use automated processes. The SKR survey found that 8 percent of the municipalities used some form of automation and only one (Trelleborg) used it for decision-making. Things may change rapidly, as 40 percent of the municipalities said they were planning to introduce automation of administrative work over the next few years.
Redefining social work
Most of the automated tasks, such as handling invoices, are uncontroversial. These programs are not especially “smart”: they are quite simple rule-based algorithms. But introducing automated decision-making in the welfare system sparked a discussion about the profession of social work and what social assistance should be.
“Financial aid is society’s safety net, and has to be assessed individually by a professional social worker. When you replace these professionals with software, many social workers feel it is a threat to their profession,” said Lupita Svensson, a researcher at Lund University’s School of Social Work.
Ms Svensson recently wrote a report about automating the welfare sector (Technology is the easy part, published in November 2019). She said that, over the last 20 years, decisions about financial aid moved away from individual assessments and towards more general, rule-based decisions.
“Initially, the legal text about financial aid gave social workers a great deal of room to manoeuvre, since the law was saying that you couldn’t generalise. When this law is converted to code, it becomes clear that social work has changed. By converting law to software, the nature of financial aid changes, as you can’t maintain the same individual assessments as before.”
Ms Svensson is also concerned by the idea that an algorithm could be impartial.
“The municipal sector has a naive view of technological advances. They think a “robot” will be impartial and objective. But how were these robots constructed? When I asked municipalities about this, they told me they followed the social workers’ processes. This means there’s a risk of copying in the norms, ideas and values that are already present in the system. There’s very little critical discussion of this.”
When Kungsbacka, a town of 20,000 inhabitants 300 kilometers north of Trelleborg, introduced the “Trelleborg model”, as it became known, in 2018, 12 of its 16 social workers resigned in protest. Some have since returned to their jobs, but the majority left for good.
Inger Grahn, a local representative for the Union for Professionals in Kungsbacka, said that the protest was about two things. Firstly, the “Trelleborg model”, or at least its automated component, might not be legal. (Kungsbacka has not implemented full automation as of early 2020.)
Secondly, implementing the Trelleborg model requires a major reorganisation of municipal services. It shifts responsibility for financial aid from the department of social services to the department of work.
Kungsbacka’s case workers said that this model might prevent them from getting the whole picture of a beneficiary. By focusing on getting beneficiaries directly into work, social issues such as children’s welfare could be missed.
Technology cannot solve everything, Ms Grahn said. “As far as we know, there aren’t yet any algorithms that take individual cases into account sufficiently to follow the law. Not when it comes to children with special needs, or any other kind of individual case,” she added.
Looking for transparency
One central concern with automated decision-making is transparency. How can automated decisions and the underlying algorithms be explained in a way everyone understands? And are algorithms official records that can be communicated to the public?
Simon Vinge, chief economist at the Union for Professionals (Akademikerförbundet SSR), has sought answers for over a year. In June 2018, he asked Trelleborg how the algorithm made decisions and how their system worked, but he did not receive satisfactory answers. After he sent a complaint to the Swedish Parliamentary Ombudsman (JO) in September 2019, he received some screenshots and a flow chart. Mr Vinge and the Union for Professionals argue that the information does not suffice to really understand how the program works, and asked for ‘meaningful information’, in the sense of Article 15 GDPR, about how a decision is made.
“When it comes to automated decision-making, no one knows what they have to share, or when you’ve received enough information to understand how an automated decision was made. I still don’t know which parameters lead to a declined application, or what is being fed into the formula,” Mr Vinge said.
Trelleborg replied that it had provided all the information it was asked for. The JO is expected to rule on the case in the coming months.
“If it’s difficult to explain how a simple rule-based algorithm works, how can we hope to explain more complex systems like machine learning?” Mr Vinge said.
Analyzing the code
Last fall, Freddi Ramel, a journalist, requested the source code of Trelleborg’s software under Sweden’s Freedom of Information Act. When Trelleborg said it was not an official document, Mr Ramel lodged an appeal with the administrative court of appeal. Trelleborg argued that the code was a trade secret, but the court decided otherwise. The source code is an official document, the judges said, and it was communicated to Mr Ramel.
The code that Trelleborg finally shared consists of 136,000 lines of rules, spread across 127 XML files. Some of the files seem to contain older, unused rulesets. Without access to the data used by the software, it is impossible to understand the rules with any certainty. The code interacts with other pieces of software, making the deciphering effort all the more difficult. But it is possible to (quite painstakingly) start outlining a general decision tree.
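A first step in such an analysis can be sketched in a few lines: walking the exported XML files and tallying which elements appear, to get a rough map of the rule constructs in use before attempting a decision tree. The directory layout and tag names below ("Rule", "Condition") are hypothetical; the real files’ structure is not publicly documented.

```python
# Sketch of a first pass over an exported ruleset, assuming the rules
# sit in a directory of XML files. Tag names are hypothetical stand-ins.
import xml.etree.ElementTree as ET
from pathlib import Path
from collections import Counter

def survey_rules(rules_dir: str) -> Counter:
    """Count element tags across all XML files in a directory,
    a rough inventory of the rule constructs the software uses."""
    tags = Counter()
    for path in Path(rules_dir).glob("*.xml"):
        for elem in ET.parse(path).iter():
            tags[elem.tag] += 1
    return tags
```

An inventory like this does not explain any single decision, but it shows where the bulk of the logic sits and which files are worth reading by hand.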
Without clear explanation from the municipality, the system remains a black box. Having the code does not change anything, Mr Vinge of the SSR union wrote in an email.
Personal data leak
The analysis of the code revealed more than some of the rules guiding the RPA. The files contained the names and social security numbers of approximately 250 people, seemingly citizens who previously had welfare-related contacts with the municipality. This data appears to have been in the code since 2017, and is now visible to anyone who files a FOI request for the code, as well as to the subcontractors working on it.
Trelleborg municipality is currently investigating why the personal data ended up in the code, and why the code was not screened before it was made public.
Even though Trelleborg introduced its “robot” three years ago, the government has only just begun looking into the issue. In January, Stockholm ordered an investigation into the use of automated decision-making by municipalities and regions. It will be published in March 2021.
Photo by Charlotta Wasteson on Flickr