This 28 January is European Data Protection Day, a date proclaimed by the Council of Europe to inform internet users and raise awareness of their rights and obligations regarding the protection of their personal data.

Among the new uses of personal data is the rapid rise of Artificial Intelligence. Because these tools access sensitive information, they have become one of the main sources of concern for authorities at the international level, especially in the European Union, as we will see in this post.

What are the data protection challenges of AI?

Artificial intelligence is growing at a dizzying pace, driven by new market needs and societal interest. In this context, a number of data protection challenges are emerging, relating to privacy and transparency, consent, security, excessive data collection, and international cooperation.

It is important to understand the scale of the AI market, as spending on AI solutions is expected to exceed $500 billion by 2027, according to a study by IDC. This represents a significant shift in investment towards the implementation of AI and the adoption of AI-enabled products.

Uncontrolled data collection

Today’s hyper-connected society faces a privacy challenge: AI systems with unregulated, indiscriminate access to data of all kinds on the web leave our personal information more exposed than ever before.

Internet of Things (IoT) systems are increasingly present, from means of transport such as autonomous cars to access control with facial recognition, our own mobile devices, and even healthcare equipment. All of them collect and manage huge amounts of personal data. This poses a security problem because, without regulation or control, these systems can become a source of valuable data for criminals.

Regulation and control

It is therefore worth considering whether we have sufficient regulation that not only guarantees the security of our data but also limits the control that companies have over our information.

It is precisely because of this boom in the trade in personal data that the EU Artificial Intelligence Act has recently been introduced: a law that aims to close the gaps and legal loopholes that have arisen around this new technology as a result of its unexpected growth and rapid development.

This new law is a first step toward regulating the competencies and scope of these systems, safeguarding individual rights and data protection for companies and individuals alike. Its main aim is to prevent abuses that could threaten individual freedom, such as biometric surveillance or the manipulation of voters, among others.

Regulation and quality data, the right pairing

AI promises an encouraging future, full of breakthroughs and innovation. However, it also underscores the need for effective regulation and control, addressing issues such as the governance and localization of the data it manages. Progress must also continue on international laws that not only regulate these tools but also facilitate cross-border data exchange, always with the protection of individual and organizational data in mind.

The other part of the equation is the data that feeds AI. Trust in that data is essential if organizations are to adopt this technology without fear. The misuse of deepfakes, the risks around the dissemination of information, and other challenges ahead show that this trust must be built deliberately. One way to do so is to improve the data with which AI is “educated”.

At the enterprise level, we are seeing steps in the right direction. At PUE, we see all kinds of companies relying on us to improve data governance, provenance, and lineage. Companies need strategies to verify the reliability and authenticity of their data and to implement robust governance. Only then will it be possible to use AI with confidence to extract more useful and valuable insights.
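
To make the governance point concrete, here is a minimal sketch, in Python, of one way to record a dataset’s provenance and re-verify its integrity before it is used to train a model. The file names (training_data.csv, lineage_registry.jsonl) and the simple JSON-lines registry are illustrative assumptions, not any particular product’s API; real governance platforms track far richer lineage metadata.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Compute a SHA-256 checksum so a dataset's integrity can be re-verified later."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def record_lineage(dataset: Path, source: str, registry: Path) -> dict:
    """Append a provenance entry (origin, timestamp, checksum) to a JSON-lines registry."""
    entry = {
        "file": dataset.name,
        "source": source,  # e.g. which system or export the data came from
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "sha256": sha256_of(dataset),
    }
    with registry.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return entry


def verify(dataset: Path, registry: Path) -> bool:
    """Check the dataset against its most recently registered checksum."""
    entries = [json.loads(line) for line in registry.read_text(encoding="utf-8").splitlines()]
    latest = next((e for e in reversed(entries) if e["file"] == dataset.name), None)
    return latest is not None and latest["sha256"] == sha256_of(dataset)


if __name__ == "__main__":
    data = Path("training_data.csv")
    data.write_text("id,value\n1,42\n")  # toy dataset, just for the demo
    reg = Path("lineage_registry.jsonl")
    record_lineage(data, source="internal CRM export (hypothetical)", registry=reg)
    print("verified:", verify(data, reg))  # True unless the file was altered afterwards
```

Even a registry this simple answers the two questions any audit starts with: where did this data come from, and has it changed since it was approved.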