
Implementation

Image by Giammarco Boscaro

Regulation

Your Symbolic Expressions belong to You, not Government or Big Tech

Symbolic Rights need to be enshrined in law, and recognised as human rights, in order to preserve and protect our symbolic expressions against AI rollouts. The Symbolic Rights Foundation advocates a baseline in which every system assumes by default that the individual does not want to use AI (the Presumption of Exclusion), so that no one feels supplanted or atomised by the technology. If the individual does consent to use AI, SR's Right to Customise should then apply: a spectrum of choice through which the user decides how little or how much of their personality is used, where, in what way and for how long. This level of autonomy is essential to ensure that the individual's identity is not used by AI in ways that have not been permitted, and to prevent the individual from feeling that their identity has been violated.

Wherever an individual's permitted expressive identity has been used, in whole or in part (by a certain percentage), for profit, the user should be entitled to compensation. Time limits and adjustments for identity data usage are therefore an essential safeguard to ensure fair use and to give the individual a stronger position from which to negotiate any compensation. The right to erasure or correction should also be guaranteed by law for those who change their mind or wish to amend any symbolic inaccuracies.

Developers would therefore need to be fully transparent about how symbolic information is used and how it affects individuals and other AI users alike. This would include transparency around AI-produced creative outputs, such as art, to ensure differentiation from human creativity and to create fairer competition between AI-generated and human-made art.
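As a purely illustrative sketch, the combination of the Presumption of Exclusion, the Right to Customise, time limits, compensation and erasure could be represented as a per-user settings record along the following lines. Every name and field here is a hypothetical example, not a proposed technical standard.

from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class SymbolicRightsSettings:
    # Presumption of Exclusion: by default, no use of the person's expressions by AI.
    ai_use_consented: bool = False
    # Right to Customise: which expressions may be used, and in which contexts.
    permitted_expressions: List[str] = field(default_factory=list)   # e.g. ["writing style"]
    permitted_contexts: List[str] = field(default_factory=list)      # e.g. ["personal assistant"]
    # How much of the person's expressive identity may be drawn on (0.0 to 1.0).
    identity_share_limit: float = 0.0
    # Time limit after which consent lapses and must be renegotiated.
    consent_expires: Optional[date] = None
    # Share of any profit owed where permitted identity is used commercially.
    compensation_share: float = 0.0
    # Right to erasure or correction can be exercised at any time.
    erasure_requested: bool = False

Under the Presumption of Exclusion, the default record (SymbolicRightsSettings()) grants AI no use of the person's expressions at all; every permission would have to be switched on explicitly by the individual.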


The Presumption of Exclusion would also indirectly mean that individuals would still be entitled to use traditional systems in areas such as public services, social events, travel, dating apps, education, health, employment and banking. This would help to preserve genuine human connections, identity and symbolic communities.


Importantly, Symbolic Rights should protect the personal identity in its entirety, respecting the fact that the individual is a unique combination of both personal and general expressions. Whilst data mining is expected to occur from expressive content that is already freely available (e.g. online), the user's Symbolic Rights settings should be guaranteed at the direct point of use with AI. Consideration should also be given to the level of decision-making the user wishes to defer to AI.


Users should be able to determine how much of the content they generate with AI is derived from human expressions. This would ensure that AI does not subtly become too "human-like" without consent and would help to prevent unintended emotional attachment. It would also encourage AI to be more distinctive in its own expressions, avoiding conflicts with, and differentiating itself from, human expression (thereby further encouraging fair competition).
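To illustrate (the function name and the notion of an estimated "human-derived share" are hypothetical assumptions, not an existing metric), such a preference could in principle be applied as a simple check before an output is shown:

def within_human_expression_cap(estimated_human_share: float, user_cap: float) -> bool:
    """Return True if the estimated fraction of an AI output derived from
    human expressions (0.0 to 1.0) stays within the level the user opted into."""
    return 0.0 <= estimated_human_share <= user_cap

# Example: within_human_expression_cap(0.40, user_cap=0.25) -> False
# (the output draws on more human expression than the user consented to)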

 

Sacred cultural and religious symbolic expressions should be given special collective protections, so that their diverse meanings endure and existing cultural homogenisation is not amplified in AI systems. This will provide an essential protection for minority and nuanced cultures, which may not otherwise be fairly represented.


Symbolic Rights should also protect in law those who wish to be AI-free or "AI-light" from punitive action (social, legal, employment etc.). The law may also need to protect such individuals against social stigma.

 

All individuals should also be able to control how much of their existing expressions (e.g. online posts) are used by AI, whether or not they use AI themselves.


It should be noted that although there are similarities with copyright, pattern recognition and replication are a new threat, and Symbolic Rights should recognise symbolic expressions as qualitatively important rather than just as intellectual property or data. This ensures that the law treats symbolic protections as an essential human right and a right to identity. Using someone else's property to make money without consent is one thing; doing so with someone's personal or cultural identity, to the point that the original symbolic meaning is all but removed from the expression, is even more demonstrably unethical and immoral. It is only from this perspective that truly human-centric AI can complement, rather than replace, human meaning-making and help preserve human dignity.


These legal protections would ensure a progressive, equal, diverse, inclusive, autonomous and transparent AI experience, and would set the scene for an exciting, strategic and economically prosperous AI future.


Industry Practice

The regulations listed above would help to define the parameters for a further ethical framework as part of good industry practice. This framework would likely focus on the following key themes:


  • Neutral, universal symbolic software to ensure consistency, trust and seamless usage between AI systems

  • Tools to enable better cultural preservation

  • Tools to promote symbolic identity

  • R&D to discover new ways to help individuals enhance and enrich their symbolic experiences rather than replace them (i.e. through creative development, cultural exploration and expansion etc.)

  • Educational tools to help users fully understand and employ their Symbolic Rights

  • Feedback loops to allow users to tell AI how well it respects and enhances their expressions, allowing for continued AI learning to better serve identity (a brief sketch of such a loop follows this list)

  • Certification
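
As one hypothetical illustration of the feedback-loop theme above (the record fields, threshold and function are assumptions rather than any existing standard), a system could log how well each interaction respected the user's settings and suggest tightening them towards the Presumption of Exclusion when feedback is poor:

from dataclasses import dataclass
from typing import List

@dataclass
class SymbolicFeedback:
    # One user rating of how well an AI interaction respected their expressions.
    interaction_id: str
    respected_settings: bool   # stayed within the consented use?
    enhancement_rating: int    # 1-5: enriched rather than replaced expression?
    comment: str = ""

def should_tighten_defaults(history: List[SymbolicFeedback]) -> bool:
    """Suggest reverting towards the Presumption of Exclusion when recent
    feedback shows the user's settings were repeatedly not respected."""
    recent = history[-10:]
    if not recent:
        return False
    breaches = sum(1 for f in recent if not f.respected_settings)
    return breaches / len(recent) > 0.3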


These areas would open the door to an ongoing and evolving ethical discussion about how AI should interact with identity going forward.


Overall, without such protections, there is a risk that the nuance, context and emotional depth of human expression could be diluted or misappropriated in the vast data pools that fuel AI. Symbolic Rights are a proactive (rather than reactive) approach, aiming to ensure that even as data becomes more commodified, the intrinsic value of our personal and cultural expressions is not reduced to mere inputs for profit or convenience, and that we ourselves decide how much decision-making we retain.
