The latest viral trend on TikTok is generating huge excitement and engaging hundreds of thousands of followers. Lifelike images of individuals – in some cases near-identical ‘twins’ – are being built by Artificial Intelligence (AI) and shared on TikTok. Users are challenged to find their AI double and duet with it on TikTok to show a side-by-side comparison.
In the era of fake news, this trend raises a whole set of legal and ethical questions which should concern all of us. There are now hundreds of thousands – and soon, possibly millions – of young people being encouraged to use AI to build images of themselves and then hand over the rights for the world to use however it wishes. Perhaps more alarmingly, users can build images of other people. Potentially, an ex could use the AI software to build a near-identical ‘twin’ of their former partner and hand over the rights for it to be used anywhere. What is the legal position for the individual whose AI ‘twin’ has been placed on a porn site or used for a fake ID? It’s a clear legal, moral and ethical conundrum. But for Gen Z TikTok users, finding their face and posting a ‘duet’ video appears to be a highly desirable case of digital serendipity, and the sharply rising engagement suggests the trend is going to build.
One user @artbreed3r issues the challenge: ‘creating faces until people who look like them duet it’.
The challenge is delivered in an eerie computer-generated voice as each 15-second video opens with short clips of facial features being adjusted in AI software – the mouse dragging a bar to reach different degrees of ‘blonde hair’, ‘brown eyes’, ‘happy’, ‘angry’, etc. Then the final product is revealed – a lifelike digital portrait. The account is dedicated solely to creating these random faces, posting videos about them, and hoping an individual with the same face in real life will stumble upon it and do a duet showing a side-by-side comparison. This account – just one user of the software, called Art Breeder – has already amassed over 400,000 followers. The trend has also generated significant engagement: buzzing comment sections, thousands of people looking for their faces and trying the software themselves. Finding their face seems to be the Gen Z equivalent of going to a gift shop and finding a keyring with their name on it – albeit with a much rarer statistical probability.
Of course, these cases happen. Some of them are so accurate it’s frightening, as are the comments comparing the faces to users’ cousins, mothers, friends and so on. But what happens with the final product? Looking beyond the initial excitement of using technology and AI in clever ways, there are some major legal and ethical questions. There are also questions of racism – of reinforcing ethnic, cultural, and gender stereotypes – which even now some early adopters are starting to raise. Many users are also unhappy that the creations are not inclusive in their representation, with comments such as: ‘all of their faces are perfectly symmetrical. I will never be on here’.
The AI software is a free-to-use website called Art Breeder, created by Joel Simon. Users can upload images, blend ‘parent’ images together, or edit an existing portrait’s ‘genes’ to create a new face. Some of the tools are simple – choosing degrees of brightness for the portrait or certain colourways. More alarming are the racial categories – choosing to what degree the face is ‘Asian’, ‘white’, ‘black’, ‘Latino-Hispanic’, ‘middle-eastern’ and ‘Indian’. Of course, this raises questions of stereotyping and reinforces ideas of certain racialized features. Taken together with the complaints of narrow representation, is there an argument that AI software like Art Breeder is inherently racist? The idea of racialized features feels close to profiling and predictive policing.
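Under the hood, tools of this kind are generally understood to be built on generative image networks, where every face corresponds to a vector of numbers (a ‘latent’), and ‘blending parents’ or dragging a slider amounts to arithmetic on those vectors. The sketch below is purely illustrative of that idea – the names, dimensions and functions are assumptions for the sake of the example, not Art Breeder’s actual API, and the image generator itself is stubbed out:

```python
# Illustrative sketch (assumed mechanism, not Art Breeder's real API):
# a face is a list of numbers; blending and slider edits are vector maths.

LATENT_DIM = 8  # toy size; real generative models use hundreds of dimensions


def blend_parents(parent_a, parent_b, weight):
    """Linearly interpolate between two parent latents (weight in [0, 1])."""
    return [(1 - weight) * a + weight * b for a, b in zip(parent_a, parent_b)]


def edit_gene(latent, direction, strength):
    """Nudge a latent along an attribute direction (e.g. a 'blonde hair' slider)."""
    return [x + strength * d for x, d in zip(latent, direction)]


# Two hypothetical parent faces as latent vectors.
mum = [0.2] * LATENT_DIM
dad = [1.0] * LATENT_DIM

child = blend_parents(mum, dad, 0.5)              # a 50/50 blend of the parents
blonder = edit_gene(child, [1.0] * LATENT_DIM, 0.3)  # push the 'blonde' slider
print(child, blonder)
```

The point of the sketch is that ‘creating a face’ is cheap, repeatable arithmetic: once a latent exists for a real person, unlimited variations can be generated from it, which is exactly what makes the rights questions above so pressing.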
With a free account, users get three chances to upload an image and generate a portrait from an existing picture of a face. When tried with the singer Harry Styles, using an image already in the public domain, a lifelike copy of Mr Styles – not a photograph but an AI-generated duplicate – was formed. From there the creative agency was entirely the maker’s, enabling the adjustment of any feature and the making of an unlimited number of derivations of his face.
Potentially, an AI double can be given freely to anyone to be used anywhere – a porn site, a wanted poster, a fake ID. This has echoes of a recent case concerning supermodel Emily Ratajkowski. Artist Richard Prince included a picture of Ratajkowski in his series of ‘Instagram Paintings’ – Instagram photos with added captions by the artist were printed on canvas. Hers was sold for $80,000. Speaking to The Cut, Ratajkowski was concerned that her image was used, printed and sold and she had no legal claim over it. ‘It seemed strange to me that he or I [her boyfriend at the time] should have to buy back a picture of myself — especially one I had posted on Instagram, which up until then had felt like the only place where I could control how I present myself to the world, a shrine to my autonomy.’ Using TikTok and Art Breeder, AI versions of images of people can be easily created and distributed with no rights for the ‘real’ person. Now, this can potentially happen on an industrial scale.
While many avid TikTok users enjoy the trend, few would likely be ready to hand over an AI-generated portrait of themselves with no rights over how that portrait is used. Artificial Intelligence is likely to bring sweeping changes to business and society, but it also poses some serious legal, moral and ethical questions about identity and ownership that tech companies, users and regulators need to get their heads around.