
Analysis of DSK Guidance

    From Data Bytes 47

    On 6 May 2024, the German Data Protection Conference ("DSK") published its "Guidance on data protection and artificial intelligence" ("Orientierungshilfe KI und Datenschutz") (the "DSK Guidance"). It provides an overview of the various criteria that controllers need to consider from a data protection perspective when using AI applications, with a particular focus on large language models ("LLMs"). The DSK Guidance is intended to help controllers – and, by implication, also developers and providers of AI systems – select, implement and use AI applications in a way that respects the rights and freedoms of data subjects and complies with the GDPR.

    The DSK Guidance is structured into three main sections: 1) conceptualizing and selecting the appropriate AI application, 2) implementing AI applications, and 3) using AI applications. Each section contains a number of questions and considerations that controllers should take into account before, during and after the deployment of AI applications. Key aspects covered by the document include:

    • Determining the fields and purposes of use of the AI application and their lawfulness.
    • Avoiding, where possible, the processing of personal data or special categories of personal data in the training data, and identifying the legal basis for such processing where it is necessary (a minimal illustration of stripping identifiers from training data follows after this list).
    • A preference for closed systems over open systems, i.e. systems where the data processing takes place in a delimited and technically secure environment and where control over the input and output data remains with the users.
    • Transparency and information obligations towards the data subjects, especially regarding the logic, scope and possible impact of automated decision-making, including profiling, based on AI applications.
    • Transparency and choice options regarding the use of input and output data for the training of the AI application and the storage of the input history.
    • Implementing the data subject rights, such as the right to rectification, erasure, restriction, data portability and objection, and providing organisational and technical procedures that allow a data subject to exercise their rights effectively.
    • Involving the data protection officer and, where applicable, employee representative bodies (such as works councils) in decisions on the use of AI applications.
    • Establishing clear responsibility and contractual arrangements between controllers using AI applications and the providers of AI applications, as well as assessing possible scenarios of joint or separate controllership and data processing situations.
    • Conducting an initial risk assessment and, if required, a comprehensive data protection impact assessment before processing personal data through AI applications, and consulting the data protection authorities, if necessary.
    • Providing devices and accounts for the employees who use AI applications for professional purposes, and thereby protecting their personal data and usage data.
    • Applying the principles of data protection by design and by default; implementing appropriate technical and organisational measures to ensure data security and resilience.
    • Verifying the accuracy and non-discrimination of the results of the AI applications, and assessing whether such results are relevant and suitable in relation to the underlying prompts.
    • Monitoring the current legal and technical developments that may affect the data processing and risk management with AI applications, and adapting the internal policies accordingly.
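
    To make the second bullet above more concrete, the following is a minimal, hypothetical Python sketch of stripping obvious identifiers from text before it is used as training data. The regex patterns and placeholder labels are illustrative assumptions only, not part of the DSK Guidance; production pipelines would typically rely on NER-based tooling (e.g. spaCy or Microsoft Presidio) and a documented review process rather than a handful of patterns.

        import re

        # Illustrative patterns only (an assumption of this sketch): obvious
        # identifiers are replaced with typed placeholders before the text
        # enters a fine-tuning dataset.
        PATTERNS = {
            "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
            "PHONE": re.compile(r"\+?\d[\d /-]{7,}\d"),
            "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
        }

        def redact(text: str) -> str:
            """Replace each pattern match with its typed placeholder."""
            for label, pattern in PATTERNS.items():
                text = pattern.sub(f"[{label}]", text)
            return text

        sample = "Contact Jane at jane.doe@example.com or +49 170 1234567."
        print(redact(sample))
        # -> Contact Jane at [EMAIL] or [PHONE].

    Note that such pattern-based redaction would not catch free-text names or indirect identifiers; it illustrates the principle of minimising personal data in training material, not a compliant implementation.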

    In line with the DSK's own introductory statement, further clarification would be helpful, in particular on the following points:

    • Right of rectification and right of erasure: The DSK suggests that controllers can fulfil the right of rectification (e.g. where the AI application produces incorrect output through hallucination) by adjusting the training data and fine-tuning, including through the use of filtering tools (Section 1.11 DSK Guidance). Where a data subject requests erasure, the controller must be aware of the risk of re-identification through the correlation of different data sources. The DSK points out that filtering tools applied to the output can help suppress personal data, even though this does not result in a definitive deletion / erasure of the personal data concerned (a minimal sketch of such an output filter follows after this list).
    • Risk and impact assessment: The DSK points out that controllers must always assess the risk related to the nature, scope, purpose and circumstances of a processing activity before commencing it. In addition, controllers will typically need to conduct a data protection impact assessment ("DPIA") when applying LLMs. In that context, the DSK recommends that controllers only use AI applications which provide the controller with all the information required to conduct such a DPIA. Further, the DSK refers to its reference list, which sets out the scenarios in which a controller must always conduct a DPIA; at this stage, however, that list has not yet been updated to address the use of AI systems.
    • Joint Controllership: The DSK points out that processing entities could be considered joint controllers where they take complementary decisions on a joint purpose and the means of processing personal data ("converging decisions" of joint controllers). The DSK sees this condition fulfilled where the processing operations are inextricably connected – such as in cooperations where an AI application is fed or trained by different data sources, or where an AI application run by a controller on its platform is further developed by other controllers into a new AI application. The DSK emphasizes that, in such a case, the controller concerned need not have factual access to the personal data being processed in order to be deemed a joint controller with the other entities of that cooperation. As a result, all cooperating partners will need to enter into a joint controller agreement (Art. 26 GDPR). Going forward, this important statement may require further clarification through examples and use cases, in order to provide users of GenAI systems with legal certainty as to when they need to insist on a joint controller agreement.
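
    To illustrate the output-filtering idea referred to in the first bullet above, the following is a minimal, hypothetical Python sketch: personal data of data subjects who have exercised their right to erasure is suppressed in the model output before it is returned to the user. The class and method names and the placeholder text are assumptions of this sketch, not terminology from the DSK Guidance.

        import re

        class OutputFilter:
            """Suppresses personal data of data subjects who requested erasure."""

            def __init__(self) -> None:
                self._suppressed: set[str] = set()

            def add_erasure_request(self, name: str) -> None:
                # Record a name that must no longer appear in any output.
                self._suppressed.add(name)

            def apply(self, model_output: str) -> str:
                # Case-insensitive, whole-word replacement with a placeholder.
                for name in self._suppressed:
                    model_output = re.sub(
                        rf"\b{re.escape(name)}\b",
                        "[REDACTED]",
                        model_output,
                        flags=re.IGNORECASE,
                    )
                return model_output

        f = OutputFilter()
        f.add_erasure_request("Max Mustermann")
        print(f.apply("The report was written by Max Mustermann in 2023."))
        # -> The report was written by [REDACTED] in 2023.

    The sketch also makes the DSK's caveat visible: the filter suppresses the output but leaves the model itself unchanged, so it mitigates rather than effects a definitive erasure.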

    In any event, users and developers of AI systems need to remain vigilant regarding the various activities and statements of the data protection authorities in Europe (e.g. the CNIL recommendations on AI, Ashurst summary of CNIL recommendations on AI; the Hamburg DPA's checklist on the controlled usage of LLM chatbots), as well as current investigations such as the complaint filed by NOYB with the Austrian data protection authority (NOYB complaint against OpenAI with Austrian DPA).

    The information provided is not intended to be a comprehensive review of all developments in the law and practice, or to cover all aspects of those referred to.
    Readers should take legal advice before applying it to specific issues or transactions.
