Revista Multidisciplinaria Perspectivas Investigativas
Multidisciplinary Journal Investigative Perspectives
Vol. 6(1), 47-57, 2026
https://doi.org/10.62574/rmpi.v6i1.507
47
Strict liability for damages arising from artificial intelligence: A critical
analysis in light of the risk framework of the Ecuadorian Civil Code
La responsabilidad civil objetiva por daños derivados de la Inteligencia
Artificial: Un análisis crítico frente al sistema de riesgos del Código Civil
ecuatoriano
María Estefania Baldeon-Navarrete
mbaldeonn@unemi.edu.ec
Universidad Estatal de Milagro, Milagro, Guayas, Ecuador
https://orcid.org/0009-0002-1286-9494
Rously Eedyah Atencio-González
reatenciog@ube.edu.ec
Universidad Bolivariana del Ecuador, Durán, Guayas, Ecuador
https://orcid.org/0000-0001-6845-1631
Geovanna Michelle Nájera-Tello
Vanna_michelle@hotmail.com
Universidad Casa Grande, Guayaquil, Guayas, Ecuador
https://orcid.org/0000-0001-7487-7938
Esperanza Jamilet Vera-Anchundia
esjavean97@hotmail.com
Red de Investigación Koinonia, Guayaquil, Guayas, Ecuador
https://orcid.org/0000-0003-3241-6740
ABSTRACT
The aim of the research was to analyse strict civil liability for damages arising from artificial intelligence in
relation to the risk framework of the Ecuadorian Civil Code. A meta-analysis design was adopted, involving
a systematic search of the Scopus, Web of Science and Latindex databases, covering the period from
January 2015 to March 2026, and including 27 sources that met the established selection criteria. Five
recurring legal phenomena were identified: diffuse causation, subjective fragmentation of liability,
informational asymmetry, obsolescence of the concept of fault, and a regulatory coverage gap, which
highlight the inadequacy of the current civil framework for attributing liability for algorithmic harm. It is
concluded that Ecuador requires regulatory reform incorporating aggravated strict liability, joint and several
liability amongst technology providers, reversal of the burden of proof, a subsidiary guarantee fund, and
differentiated transparency obligations according to the system’s risk level.
Descriptors: artificial intelligence; civil liability; civil rights. (Source: UNESCO Thesaurus).
RESUMEN
La investigación tuvo como objetivo analizar la responsabilidad civil objetiva por daños derivados de la
inteligencia artificial frente al sistema de riesgos del Código Civil ecuatoriano. Se adoptó un diseño de
metaanálisis mediante la búsqueda sistemática en bases de datos Scopus, Web of Science, Latindex, con
un período de cobertura entre enero de 2015 y marzo de 2026, incluyendo 27 fuentes que cumplieron los
criterios de selección establecidos. Se identificaron cinco fenómenos jurídicos recurrentes: causalidad
difusa, fragmentación subjetiva de la responsabilidad, asimetría informacional, obsolescencia del concepto
de culpa y laguna normativa de cobertura, que evidencian la insuficiencia del marco civil vigente para
atribuir responsabilidad por daños algorítmicos. Se concluye que el Ecuador requiere una reforma
normativa que incorpore responsabilidad objetiva agravada, solidaridad entre actores tecnológicos,
inversión de la carga probatoria, un fondo de garantía subsidiario y obligaciones de transparencia
diferenciadas según el nivel de riesgo del sistema.
Descriptores: inteligencia artificial; responsabilidad civil; derechos civiles. (Fuente: Tesauro UNESCO).
Received: 19 January 2026. Reviewed: 25 January 2026. Accepted: 23 March 2026. Published: 27 March
2026.
Research articles section
INTRODUCTION
The 21st century has established artificial intelligence (AI) as one of the most significant
transformative forces in the contemporary legal system; its widespread deployment in sectors
such as healthcare, justice, finance and public administration creates scenarios in which
decisions made by algorithmic systems have direct and indirect effects on individuals’ subjective
rights, such that legal science is compelled to ask whether the traditional instruments of civil
liability, designed to regulate voluntary human conduct, are suitable when the agent causing the
harm is a machine equipped with autonomous learning capabilities.
From a historical perspective, the law has responded to each technological revolution through
the progressive adaptation of its categories: the advent of the automobile gave rise to regimes
of strict liability for hazardous activities; nuclear energy introduced absolute liability; and the
digital age challenged the concepts of identity, privacy and authorship. AI, however, presents a
singularity that goes beyond these precedents, as its ability to learn, adapt and make decisions
in environments not foreseen by the original programmer blurs the boundaries between the
designer, the operator and the autonomous object as sources of legal causation; as noted,
among others, by Čerka et al. (2015) and Parra-Sepúlveda and Concha-Machuca (2021).
In the Latin American context, Ecuador lacks specific legislation regulating liability arising from
the use of AI; the Ecuadorian Civil Code, which draws on the Chilean codification tradition of the
mid-19th century, structures non-contractual liability on the basis of proven fault or the
presumption of fault in certain cases involving dangerous activities, but does not provide for a
separate regime for damages caused by advanced technological systems; this omission is
particularly serious given that Ecuadorian case law lacks established precedents in this area
and that national legal scholarship is only just beginning the debate, as noted by Narváez-López
(2019) and Concha-Flores (2024).
The central question guiding this research is formulated as follows: is the strict liability regime
provided for in the Ecuadorian Civil Code sufficient to attribute liability for damages arising from
artificial intelligence systems, or is a regulatory reform necessary to incorporate specific criteria
in line with the technological nature of the causative agent? Its relevance lies in the fact that
regulatory gaps not only create legal uncertainty for victims but also act as a disincentive to the
responsible development of technology in the country, as highlighted by Reed (2018) and
Gallegos-Unda et al. (2025).
International literature has advanced the study of this problem from different perspectives:
authors from the Anglo-Saxon world have proposed frameworks of strict liability without fault, as
argued by Marchisio (2021); comparative research has examined the differential treatment
offered by systems of subjective, objective and hybrid liability, as documented by Díaz and
Mitrani (2025); and European regulatory developments, particularly the European Union’s civil
liability directives, have opened a debate that transcends continental borders, within the
framework analysed by Solaiman and Malik (2025); against this backdrop, the analysis of the
Ecuadorian case provides a perspective situated within a peripheral civil law system, with
institutional and technological capacities currently in the process of consolidation.
Thus, the objective of this research is to analyse strict civil liability for damages arising from
Artificial Intelligence in relation to the risk regime of the Ecuadorian Civil Code.
Strict civil liability: doctrinal foundations
Strict civil liability, also known as liability for risk or no-fault liability, constitutes one of the most
significant constructs of modern private law; unlike the subjective regime, which requires proof
of fault or intent on the part of the agent causing the damage, strict liability attributes the harmful
consequences to whoever introduces a source of risk into society, regardless of whether they
acted diligently or not; this formulation, with its roots in French and Italian doctrine, found
acceptance in Latin American legal systems through general clauses on dangerous activities, as
noted by Jácome Aguirre et al. (2023) and Guamán-Quinzo and Batista-Hernández (2024).
In the Ecuadorian Civil Code, non-contractual liability is based on Article 2214, which
establishes the obligation to compensate for damage caused by a wrongful act committed with
intent or through negligence, and on Article 2229, which provides for liability for activities that,
by their nature, are capable of causing harm to third parties; however, the application of these
provisions to AI systems faces the problem of the so-called algorithmic ‘black box’, that is, the
practical impossibility of determining precisely which element of the system caused the harmful
decision and whether there was any form of fault attributable to an identified human agent, as
noted by Rodríguez-Corría and Alba-Cazales (2025) and Jaramillo-Valdivieso (2024).
Ecuadorian legal doctrine has noted that the equivalence between gross negligence and wilful
misconduct, provided for in Article 29 of the Civil Code, introduces a presumption that could be
extended to the treatment of grossly negligent algorithmic errors; this construction, however,
proves insufficient when the harm arises from the system’s autonomous behaviour, not directly
attributable to the decision of any human operator, as noted by Jácome Aguirre et al. (2023);
strict liability therefore appears to be the mechanism most consistent with the distributive logic
of technological risk, as it falls upon whoever derives the economic benefit from the risky
activity.
Artificial intelligence and types of harm
The specialist literature identifies at least three categories of harm that may be caused by AI
systems: physical harm, financial harm and harm to fundamental rights such as privacy, equality
and dignity; the first category includes bodily injury or death caused by autonomous devices
such as vehicles, surgical robots or drones, the second category includes financial losses
arising from algorithmic decisions in stock markets, credit granting or insurance management,
and the third category arises when facial recognition systems, data profiling or automated
decision-making result in discrimination or infringe upon individuals’ privacy, as described by
Bottomley and Thaldar (2023) and Wang and Zhou (2025).
For legal analysis, it is pertinent to distinguish between weak AI systems, which execute specific
tasks predefined by the programmer, and strong AI or advanced machine learning systems,
capable of operating in unforeseen environments and modifying their own decision-making
parameters; this distinction matters because, in weak systems, the causal chain between design
and harm is relatively traceable, whereas in strong systems causality becomes so blurred that
no human agent can fully foresee or control the consequences of algorithmic action, as argued
by Erkan and Biswas (2025) and Custers et al. (2025).
Within the Ecuadorian justice system, the incorporation of AI tools for case management,
sentence prediction and evidence analysis introduces specific risks linked to the automation of
judicial decisions; on this subject, authors such as Aguas-Yáñez et al. (2024), Escobar-Escobar
et al. (2024), Zabala-Balladares et al. (2024) and Orozco-Zavala et al. (2024) have warned of
the dangers of algorithmic bias in criminal and civil proceedings, the lack of transparency in
decision-making and the absence of accountability mechanisms when the system produces
erroneous results.
Models of liability in comparative law
The debate over the most appropriate model of liability for AI-related damages has produced
three main proposals in legal scholarship: the first proposes maintaining the regime of
subjective liability with evidential adaptations, such as the reversal of the burden of proof or the
use of presumptions of fault, a position held by those who believe that strict liability discourages
technological innovation; the second proposes adopting a regime of pure strict liability, similar to
that applicable to nuclear activities, which dispenses with any consideration of fault and bases
liability solely on the causal link between the system’s operation and the damage caused, a
position supported by Marchisio (2021) and Erkan and Biswas (2025).
The third proposal, which is becoming increasingly dominant in European and Latin American
debates, opts for a hybrid regime that combines objective and subjective elements depending
on the type of system, the degree of autonomy, the sector of activity and the type of harm
caused; this alternative envisages the shared allocation of liability between the algorithm
developer, the operator deploying it and the user utilising it, assigning to each a share
proportional to their capacity to control the system, in line with the proposals of Díaz and Mitrani
(2025), Muñoz (2025) and Custers et al. (2025).
At the European level, the European Union’s AI Liability Directive and the reform of the
Defective Products Liability Directive have established that high-risk AI systems must be subject
to a strict liability regime with mandatory quantitative coverage limits, whilst low-risk systems
remain under the fault-based regime with the burden of proof reversed; this model has been
analysed from a comparative perspective as a benchmark for Latin American legal systems
lacking sector-specific regulations on AI, as noted by Solaiman and Malik (2025) and
Schoolcraft et al. (2026).
The risk regime in the Ecuadorian Civil Code
The Ecuadorian Civil Code incorporates the principle of liability for dangerous activities through
provisions on damage caused by animals, the collapse of buildings and industrial activities;
Ecuadorian legal doctrine has debated whether these clauses can be interpreted broadly to
include advanced technological systems as sources of risk analogous to those contemplated by
nineteenth-century legislators, and in this debate, the argument in favour of analogical
extension maintains that the basis of the rule is not the specific nature of the dangerous object,
but the general principle that whoever creates a risk must bear the harmful consequences
arising from it, a position shared by Guanoluisa Almache et al. (2021) and Zúñiga-Hurtado and
Hurtado-Macías (2023).
However, the doctrine of analogical extension faces significant technical objections: firstly, the
principle of specificity in civil torts requires that the factual basis of the rule be sufficiently
defined, as excessive flexibility in the rule may lead to legal uncertainty and contradictory
judicial decisions; secondly, the dangerous activities covered by the Ecuadorian Civil Code
presuppose the intervention of a human agent who controls the activity, a situation that does not
always hold true in highly autonomous AI systems; and thirdly, establishing the causal link in
cases of AI-related harm requires highly specialised technical knowledge that the Ecuadorian
judicial system has not yet systematically developed, as noted by Kostrzewa and Nowak (2022)
and Guamán-Quinzo and Batista-Hernández (2024).
These limitations have led some legal scholars to consider that the response cannot be purely
interpretative, but rather requires legislative reform that incorporates, within the Civil Code itself
or in a complementary special law, a differentiated liability regime for the development,
deployment and use of AI systems; a position shared by Concha-Flores (2024), Parra-
Sepúlveda and Concha-Machuca (2021), and Rodríguez-Corría and Alba-Cazales (2025).
METHOD
The research adopted a qualitative meta-analysis design, aimed at the systematic and critical
synthesis of published doctrinal, regulatory and jurisprudential contributions on civil liability for
damages arising from AI systems; qualitative meta-analysis, unlike the quantitative meta-
analysis typical of the health sciences, did not seek the statistical aggregation of numerical data,
but rather the interpretative integration of the results of multiple qualitative studies on the same
legal phenomenon, with the aim of constructing a broader and more far-reaching understanding
than that offered by each individual study, in line with the approach proposed by Parra-
Sepúlveda and Concha-Machuca (2021).
The literature search was conducted in the Scopus, Web of Science, Latindex, Google Scholar
and Dialnet databases, as well as the institutional repositories of UNIANDES and the
Ecuadorian and Latin American universities with the highest output in civil law and technology;
the search terms used were: ‘artificial intelligence and civil liability’, ‘AI-related damages and
civil law’, ‘strict liability and AI’, ‘Ecuadorian Civil Code and technology’, artificial intelligence
liability, AI tort law and their combined variants in Spanish, English and Portuguese; the search
period spanned from January 2015 to March 2026.
The inclusion criteria applied were as follows: a) publications in peer-reviewed journals with an
impact factor or indexed in recognised databases; b) texts that specifically addressed civil
liability in relation to AI systems; c) studies examining Ecuadorian, Latin American or
comparative civil law; and d) publications available in full text; the exclusion criteria comprised:
a) texts dealing exclusively with AI from a technical or computer science perspective, without
any legal connection; b) publications in languages other than Spanish, English or Portuguese;
and c) opinion pieces without verifiable theoretical backing.
Twenty-seven primary sources that met all the specified criteria were included; the analysis
proceeded in three successive phases: in the first, the thematic categories emerging from the
literature were identified, including models of attribution, causal link, distribution of liability and
regulatory proposals; in the second, the central arguments of each source were systematised in
relation to these categories; and in the third, the arguments were compared with one another
and contrasted with the current Ecuadorian regulatory framework, with the aim of drawing
inferences geared towards the formulation of reform proposals; methodological rigour was
ensured through theoretical triangulation and cross-verification of sources.
RESULTS
The meta-analysis of the reviewed literature identifies five recurring legal phenomena that
shape the landscape of civil liability for AI-related harm in the Ecuadorian context:
Regulatory phenomena identified in relation to the objective
The first is the phenomenon of diffuse causation; in machine learning AI systems, the causal
chain linking the system’s behaviour to the damage caused cannot be reconstructed using
traditional evidentiary tools, as the statistical and probabilistic nature of AI models means that a
single algorithmic decision may stem from millions of training data points, adjustments made by
multiple actors at different times, and contextual variables not foreseen in the original design;
such causal diffusion renders the requirement of a direct causal link, as demanded by Article
2214 of the Ecuadorian Civil Code, inoperative, given that the victim faces a burden of proof
that is practically impossible to meet without access to the system’s internal data, which in many
cases constitutes protected trade secrets, as noted by Čerka et al. (2015), Erkan and Biswas
(2025) and Rodríguez-Corría and Alba-Cazales (2025).
The second phenomenon is the subjective fragmentation of liability; AI systems are produced,
distributed and used by a chain of actors that includes algorithm developers, technology
infrastructure providers, operators who customise the system and end users, such that this
fragmentation prevents the identification of a single liable party when the harm arises from the
interaction between the different links in the chain;
the Ecuadorian Civil Code, designed to attribute liability to a specific individual, lacks
mechanisms for distributing liability amongst multiple actors with varying levels of control over
the agent causing the harm, as noted by Custers et al. (2025) and Muñoz (2025).
The third phenomenon is information asymmetry; developers of AI systems possess technical
knowledge regarding the system’s operation that victims, judges and ordinary court experts do
not possess, creating a structural disadvantage for the injured party, who must prove a causal
link whose understanding requires knowledge of advanced mathematics, statistics and
computer science; the rules of evidence in Ecuador’s General Organic Code of Procedure do
not provide for specific mechanisms to facilitate the presentation of evidence in this type of
litigation, a circumstance that exacerbates the victim’s situation, as documented by Kostrzewa
and Nowak (2022) and Aguas-Yáñez et al. (2024).
The fourth phenomenon is the obsolescence of the concept of fault for damages caused by
autonomous AI; when an AI system makes a harmful decision in the exercise of its autonomous
learning capacity, there is no voluntary human behaviour to which a finding of fault can be
applied, since the system does not act with negligence or wilful misconduct, but in accordance
with its own adaptive parameters, which may lead to consequences that no human actor
foresaw or could reasonably have foreseen; the application of the concepts of slight fault, gross
negligence and wilful misconduct under Article 29 of the Civil Code to this scenario is legally
strained and practically ineffective, in line with the arguments put forward by Narváez-López
(2019), Jácome Aguirre et al. (2023) and Díaz and Mitrani (2025).
The fifth phenomenon is the regulatory gap in coverage; Ecuador lacks specific legislation on
AI, whether in the form of general law or sector-specific regulation, and the Organic Law on the
Protection of Personal Data, enacted in 2021, whilst providing tools for the processing of
personal data by algorithmic systems, does not regulate civil liability arising from material or
non-pecuniary damage that such systems may cause; this absence creates a void that can only
be filled, provisionally and imperfectly, through the broad interpretation of the provisions of the
Civil Code, with all the risks of legal uncertainty that this entails, as warned by Gallegos-Unda et
al. (2025) and Orozco-Zavala et al. (2024).
Regulatory proposal for the incorporation of AI into the Ecuadorian Civil Code
Based on the phenomena identified, the research formulates a regulatory proposal structured
around five pillars, intended to be incorporated into the Ecuadorian Civil Code, either through
the reform of existing provisions on non-contractual liability or through the addition of a specific
title relating to liability for high-risk technological activities.
The first pillar proposes the incorporation of a clause on aggravated strict liability for high-risk AI
systems; this clause would establish that anyone who develops, operates or uses an AI system
capable of making decisions with significant effects on the life, physical integrity, property or
fundamental rights of third parties shall be liable for the damage caused by the operation of
such a system, regardless of the existence of fault, with proof of the damage and the causal link
between the operation of the system and the harm suffered being sufficient; the determination
of high-risk systems would be referred to an updatable technical regulation, drawn up by the
competent regulatory body, in line with the approach adopted by European regulation and with
international risk classification standards.
The second pillar proposes establishing a regime of joint and several liability between the
developer, operator and user, with rights of recourse amongst them proportional to their share
of control over the system; where it is not possible to determine which of the actors in the
technological chain caused the damage, they would all be jointly and severally liable to the
victim, and each party’s share of liability would be determined in a subsequent recourse action,
based on criteria such as technical capacity to intervene in the system, the economic benefit
obtained and the level of information available on the risks at the time of deployment, as
proposed by Custers et al. (2025) and Muñoz (2025).
The third pillar proposes the reversal of the burden of proof regarding the causal link for
damages caused by AI systems; under this rule, once the victim has established the damage
suffered and the interaction with the system, it will be incumbent upon the defendant to
demonstrate that the damage was not caused by the system’s operation or that it operated
within the safety parameters required by applicable regulations; this reversal is justified by the
information asymmetry that characterises such litigation and by the procedural principle of ease
of proof, which assigns the burden of proof to the party with better access to the means of
evidence, in line with the arguments put forward by Bottomley and Thaldar (2023) and Wang
and Zhou (2025).
The fourth pillar proposes the creation of a subsidiary guarantee fund for victims of AI-related
harm, covering cases where the liable party is insolvent or cannot be identified; this fund would
be financed through a system of mandatory contributions payable by developers and operators
of systems classified as high or medium risk, proportional to the turnover generated by the
activity and the risk level of the deployed system; the mechanism has precedents in the field of
road traffic accidents in several Latin American countries, so its extension to the field of AI is
legally consistent with the region’s civil law tradition.
The fifth pillar proposes incorporating an obligation of transparency and auditability as a
condition for mitigated liability; developers and operators of AI systems who have complied with
the duties of registration, documentation, independent auditing and risk communication set out
in the regulations could benefit from a regime of quantitatively limited liability, which seeks to
balance the protection of victims with the incentive for the responsible development of AI
technologies, preventing a regime of unlimited liability from restricting technological innovation
in Ecuador, in line with the guidance provided by Reed (2018) and Marchisio (2021).
DISCUSSION
The results of the meta-analysis enable a productive dialogue with theory and with precedents
in comparative law; the observation of the phenomenon of diffuse causation confirms the thesis
of Čerka et al. (2015), who noted at an early stage that causality is the Gordian knot of AI
liability, a thesis complemented by the more recent contributions of Erkan and Biswas (2025),
who emphasise that causal diffusion is exacerbated as AI systems incorporate unsupervised
learning capabilities; the Ecuadorian Civil Code, structured around a notion of direct and linear
causality, does not offer adequate tools to address this reality without substantive reforms.
The phenomenon of subjective fragmentation of liability engages directly with the proposal by
Custers et al. (2025), who have coined the concept of ‘liability gaps’ to refer to regulatory
spaces in which no actor in the technological chain can be held liable under current rules; the
proposal for joint and several liability with rights of recourse formulated in this research takes up
these authors’ insight and adapts it to the Ecuadorian civil law tradition, which already
recognises joint and several liability as a mechanism for enhanced protection of victims in other
areas of private law.
The debate on the obsolescence of fault as the basis for liability finds particular resonance in
this research with the proposal by Marchisio (2021), who advocates the adoption of no-fault
liability rules as the solution most consistent with the precautionary principle that should guide
the deployment of transformative technologies; doctrinal resistance to this proposal, which fears
a disincentive effect on innovation, has been qualified in recent empirical studies, which have
found no evidence that strict liability regimes significantly reduce investment in technological
innovation when accompanied by compulsory insurance mechanisms, as documented by Díaz
and Mitrani (2025).
The proposal to reverse the burden of proof is, in turn, aligned with trends in comparative law
analysed by Bottomley and Thaldar (2023) in the field of healthcare and by Wang and Zhou
(2025) in relation to large-scale AI systems; both studies have demonstrated that the reversal of
the burden of proof, combined with a transparency obligation regarding the system’s operation,
constitutes the most effective mechanism for reducing the information asymmetry that
characterises AI liability litigation.
A comparison with the European model reveals that Ecuador could benefit from adopting a
regulatory approach based on risk classification, similar to that of the European Union’s AI
Regulation; this model must, however, be adapted to the specific conditions of the Ecuadorian
context, which include more limited institutional oversight capacity,
an emerging technological development sector and a population with unequal access to digital
technologies, as the uncritical transposition of European models without such adaptation could
result in unenforceable rules or ones that favour technology firms at the expense of individuals’
rights, as warned by Solaiman and Malik (2025) and Gallegos-Unda et al. (2025).
The discussion also highlights an unresolved tension in the literature between the need to
protect victims of AI-related harm and the need not to restrict technological development
through an excessively severe liability regime; the proposal put forward in this research seeks to
articulate a balanced solution through a scheme of differentiated liability based on the system’s
risk level, the obligation of transparency as a condition for mitigation, and the creation of a
subsidiary guarantee fund. This framework translates the best practices identified in the
international literature into the context of Ecuadorian civil law, whilst fully respecting its
principles and doctrinal tradition, in line with the guidance provided by Concha-Flores (2024)
and Parra-Sepúlveda and Concha-Machuca (2021).
Finally, it should be noted that regulatory reform, whilst indispensable, is not sufficient on its
own; the effectiveness of the civil liability regime for AI-related damages also depends on
strengthening the technical capabilities of the Ecuadorian judicial system to understand and
assess technological evidence, on the specialised training of judges in technology law, and on
the development of a legal culture of precaution in the face of innovations with high social
impact; without these enabling conditions, even the best regulatory design risks remaining a
mere statement of principles with no practical translation into the effective protection of victims’
rights, as noted by Zabala-Leal and Gómez-Macfarland (2024) and Narváez-López (2019).
CONCLUSION
An analysis of strict civil liability for damages arising from artificial intelligence in relation to the
risk framework of the Ecuadorian Civil Code shows that the current regulatory framework is
structurally insufficient to attribute liability when the agent causing the damage is an algorithmic
system equipped with autonomous learning capabilities; Articles 2214 and 2229 of the Civil
Code, designed to regulate voluntary human conduct and dangerous activities whose risks are
foreseeable and controllable, do not address the diffuse causality inherent in artificial
intelligence systems, the subjective fragmentation between developers, operators and users, or
the information
asymmetry that places the victim at an insurmountable evidential disadvantage under ordinary
procedural rules; such a regulatory gap does not constitute a minor deficiency susceptible to
hermeneutic correction, but rather a coverage gap that compromises the legal certainty of
individuals and discourages responsible technological development in the country.
Against this backdrop, the reviewed literature converges in pointing out that strict liability,
structured on the principle that whoever introduces a risk into society must bear the harmful
consequences regardless of their diligence, represents the mechanism most consistent with the
technological nature of the causative agent; its effective application in the Ecuadorian context
requires, however, that it be complemented by a regime of joint and several liability amongst the
actors in the technological chain, the reversal of the burden of proof regarding the causal link,
the creation of a subsidiary guarantee fund for victims where the liable party is insolvent or
unidentifiable, and an obligation of transparency and auditability, compliance with which operates as a
condition for the quantitative mitigation of liability; this set of mechanisms, systematically
coordinated, translates the most established practices of comparative law into the framework of
Ecuadorian civil law, with full respect for the region’s civil law tradition.
Legislative reform incorporating these key elements, whether through amendment of the Civil
Code or via a special supplementary law on liability for high-risk technological activities,
constitutes a necessary but not sufficient condition; its practical effectiveness depends on
strengthening the technical capabilities of the Ecuadorian judicial system to assess algorithmic
evidence, on the specialised training of judges in technology law, and on the institutional
development of a regulatory body with the technical expertise to classify artificial intelligence
systems according to their level of risk; without these enabling conditions, even the best
regulatory design will be reduced to a mere statement of principles with no effective translation
into the protection of the rights of those who suffer the harm caused by these systems.
FUNDING
Non-monetary
CONFLICT OF INTEREST
There is no conflict of interest with persons or institutions linked to the research.
ACKNOWLEDGEMENTS
To the Ecuadorian justice system.
REFERENCES
Aguas-Yáñez, F. J., López-Cando, D. P., Moreano-Santos, A. X., & Piray-Rodríguez, P. O.
(2024). La inteligencia artificial en la judicatura penal: Evaluación de beneficios y retos
[Artificial intelligence in the criminal judiciary: Assessing benefits and challenges].
Verdad y Derecho. Revista Arbitrada de Ciencias Jurídicas y Sociales, 3(especial4),
178–184. https://doi.org/10.62574/cgnnb341
Bottomley, D., & Thaldar, D. (2023). Liability for harm caused by AI in healthcare: An overview
of the core legal concepts. Frontiers in Pharmacology, 14, 1297353.
https://doi.org/10.3389/fphar.2023.1297353
Čerka, P., Grigienė, J., & Sirbikytė, G. (2015). Liability for damages caused by artificial
intelligence. Computer Law & Security Review, 31(3), 376–389.
https://doi.org/10.1016/j.clsr.2015.03.008
Concha-Flores, L. F. (2024). Inteligencia artificial, enfoque de riesgos y responsabilidad civil:
Aspectos centrales para una razonabilidad práctica [Artificial intelligence, risk-based
approach, and civil liability: Key aspects for practical reasonableness]. Sapientia Iuris,
(1), 140–169. https://doi.org/10.5281/zenodo.14735552
Custers, B., Lahmann, H., & Scott, B. I. (2025). From liability gaps to liability overlaps: Shared
responsibilities and fiduciary duties in AI and other complex technologies. AI & Society,
40(5), 4035–4050. https://doi.org/10.1007/s00146-024-02137-1
Díaz, M. P., & Mitrani, C. (2025). La responsabilidad por daños causados por el uso de IA:
¿Responsabilidad objetiva, subjetiva o híbrida? [Liability for damages caused by the
use of AI: Strict, subjective, or hybrid liability?]. Revista Jurídica de la Universidad de
San Andrés, (20), 169–189. https://doi.org/10.64928/cpp9nf62
Erkan, F., & Biswas, T. D. (2025). Artificial intelligence and liability for damages. International
Journal of Social Sciences, 9(41), 297–343. https://doi.org/10.52096/usbd.9.41.16
Escobar-Escobar, D. A., Luna-Sánchez, A. C., Viteri-Tacoamán, S. A., & García-Sanipatín, L.
R. (2024). Efectos de la inteligencia artificial en la administración de justicia [Effects of
artificial intelligence on the administration of justice]. Verdad y Derecho. Revista
Arbitrada de Ciencias Jurídicas y Sociales, 3(especial2), 41–49.
https://doi.org/10.62574/mr9wjn56
Gallegos-Unda, V. C., Proaño-Reyes, G. M., & Castro-Sánchez, F. J. (2025). Beneficios y
riesgos de la inteligencia artificial en la protección de datos personales en Ecuador
[Benefits and risks of artificial intelligence in personal data protection in Ecuador].
Noesis, 7(esp2), 1330–1347. https://doi.org/10.35381/noesisin.v7i2.658
Guamán-Quinzo, C. del R., & Batista-Hernández, N. (2024). Modificación del Código Civil
ecuatoriano para incrementar el reconocimiento del divorcio incausado [Amendment of
the Ecuadorian Civil Code to increase recognition of no-fault divorce]. Revista Lex,
7(27), 1526–1545. https://doi.org/10.33996/revistalex.v7i27.260
Guanoluisa-Almache, F. A., Crespo-Berti, L. A., & Liscano-Chapeta, C. J. (2021). Principio
constitucional de responsabilidad judicial en el Distrito de Imbabura 2015–2021
[Constitutional principle of judicial liability in the Imbabura District 2015–2021]. Dilemas
Contemporáneos: Educación, Política y Valores, 8(spe4), 00051.
https://doi.org/10.46377/dilemas.v8i.2804
Jácome-Aguirre, G., Pérez-Rosales, D., & Arguello-Almeida, A. (2023). ¿Por qué el Código Civil
ecuatoriano equipara la culpa grave y el dolo? Análisis jurídico sobre la culpabilidad, su
aplicación y finalidad [Why does the Ecuadorian Civil Code equate gross negligence
and intent? Legal analysis of culpability, its application, and purpose]. USFQ Law
Review, 10(2). https://doi.org/10.18272/ulr.v10i2.3000
Jaramillo-Valdivieso, J. A. (2024). Los robots como animales desde la responsabilidad civil
[Robots as animals from the perspective of civil liability]. Debate Jurídico Ecuador, 7(2),
231–243. https://doi.org/10.61154/dje.v7i2.3420
Kostrzewa, Ł., & Nowak, R. (2022). Polish court ruling classification using deep neural
networks. Sensors, 22(6), 2137. https://doi.org/10.3390/s22062137
Marchisio, E. (2021). In support of "no-fault" civil liability rules for artificial intelligence. SN Social
Sciences, 1(2), 54. https://doi.org/10.1007/s43545-020-00043-z
Muñoz, R. A. (2025). Responsabilidad civil por el uso de IA en plataformas digitales [Civil
liability for the use of AI in digital platforms]. Anuario del Centro de Investigaciones
Jurídicas y Sociales, 23(23), 457–471.
Narváez-López, C. (2019). La inteligencia artificial entre la culpa, la responsabilidad objetiva y
la responsabilidad absoluta en los sistemas jurídicos del derecho continental y
anglosajón [Artificial intelligence between fault, strict liability, and absolute liability in civil
law and common law systems].
Orozco-Zavala, E. R., Amores-Castillo, A. L., Donoso-Beltrán, W. J., & Silva-Andrade, G. J.
(2024). La inteligencia artificial (IA) como amenaza del sistema jurídico [Artificial
intelligence (AI) as a threat to the legal system]. Verdad y Derecho. Revista Arbitrada
de Ciencias Jurídicas y Sociales, 3(especial4), 185–192.
https://doi.org/10.62574/cmftgr38
Parra-Sepúlveda, D., & Concha-Machuca, R. (2021). Inteligencia artificial y derecho:
Problemas, desafíos y oportunidades [Artificial intelligence and law: Problems,
challenges, and opportunities]. Vniversitas, 70, 1–25.
https://doi.org/10.11144/Javeriana.vj70.iadp
Reed, C. (2018). How should we regulate artificial intelligence? Philosophical Transactions of
the Royal Society A, 376(2128), 1–12. https://www.jstor.org/stable/26601758
Rodríguez-Corría, R., & Alba-Cazales, D. (2025). La responsabilidad civil derivada del uso de
sistemas de inteligencia artificial: Un conflicto actual y preocupante [Civil liability arising
from the use of artificial intelligence systems: A current and concerning conflict]. Debate
Jurídico Ecuador, 8(3), 512–534. https://doi.org/10.61154/dje.v8i3.4162
Schoolcraft, D., Meltzer, A. C., Sangal, R., Terry, A. T., Robertson, K., Buckland, D., Motalib, S.,
Genes, N., Vukmir, R., Waseem, T., & ACEP AI Task Force. (2026). Health Insurance
Portability and Accountability Act liability in the age of generative artificial intelligence.
Journal of the American College of Emergency Physicians Open, 7(2), 100317.
https://doi.org/10.1016/j.acepjo.2025.100317
Solaiman, B., & Malik, A. (2025). Regulating algorithmic care in the European Union: Evolving
doctor–patient models through the Artificial Intelligence Act and liability directives.
Medical Law Review, 33(1), fwae033. https://doi.org/10.1093/medlaw/fwae033
Wang, Y., & Zhou, Z. (2025). Medical damage liability risk of medical AI: From the perspective
of DeepSeek's large-scale deployment in Chinese hospitals. Frontiers in Public Health,
13, 1726205. https://doi.org/10.3389/fpubh.2025.1726205
Zabala-Balladares, K. L., Moncayo-Morlas, N. K., Jiménez-Andrade, W. G., & Ros-Álvarez, D.
(2024). Ética y responsabilidad en el uso de la inteligencia artificial en procesos
judiciales [Ethics and responsibility in the use of artificial intelligence in judicial
processes]. Verdad y Derecho. Revista Arbitrada de Ciencias Jurídicas y Sociales,
3(especial2), 239–246. https://doi.org/10.62574/bdvzg165
Zabala-Leal, T. D., & Gómez-Macfarland, C. A. (2024). La responsabilidad civil y la ética en la
inteligencia artificial: Una revisión sistemática de las ideas del período 2018–2023 [Civil
liability and ethics in artificial intelligence: A systematic review of ideas from the period
2018–2023]. IUSTA, 60, 66–93. https://doi.org/10.15332/25005286.9964
Zúñiga-Hurtado, E. P., & Hurtado-Macías, N. de J. (2023). La acción rescisoria pauliana desde
el código civil ecuatoriano [The Paulian rescissory action under the Ecuadorian Civil
Code]. Iustitia Socialis, 8(14), 17–28. https://doi.org/10.35381/racji.v8i14.2414
Copyright: 2026 By the authors. This article is open access and distributed under the terms and conditions of
the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0) licence
https://creativecommons.org/licenses/by-nc-sa/4.0/