International Commercial Arbitration & Technology: An Authors' Interview with Generative Artificial Intelligence

Introduction

See the accompanying annex: A Conversation with AI Interview Annex.

It has been stated that generative artificial intelligence “is not yet accurate enough to provide legal information directly to laypeople.”  [Jinzhe Tan et al., ChatGPT as an Artificial Lawyer?, in AI4AJ 2023, Workshop on Artificial Intelligence for Access to Justice 7 (L. Karl Branting ed., 2023), CEUR Workshop Proceedings, https://ceur-ws.org/Vol-3435/short2.pdf.]  On the other hand, many sources describe the introduction of generative AI as a milestone for the legal profession and for international commercial arbitration.  [See, e.g., Claire Morel de Westgaver, Canvassing Views on AI in IA: The Rise of Machine Learning, Kluwer Arbitration Blog (July 12, 2023), https://arbitrationblog.kluwerarbitration.com/2023/07/12/canvassing-views-on-ai-in-ia-the-rise-of-machine-learning.]  The authors of this paper took on the task of conducting an “interview” with generative AI to assess what it “thinks” about its newfound role in international commercial arbitration.

The quotation marks are deliberate.  The “AI revolution” of 2022–2023 refers to the emergence of generative AI software, i.e., large language models (“LLM”) that can generate new data (in particular, text) based on generative models.  [See Erik Brynjolfsson et al., Generative AI at Work 1 n.1 (Nat’l Bureau of Econ. Rsch., Working Paper No. 31161, 2023).]  The authors share the opinion that generative AI should not be compared with artificial general intelligence, [See, e.g., Andrzej Kisielewicz, Bajki o sztucznej inteligencji i prawdziwe zagrożenia [Fairy tales of AI and real threats], Wszystko Co Najważniejsze (July 18, 2023), https://wszystkoconajwazniejsze.pl/andrzej-kisielewicz-bajki-o-sztucznej-inteligencji-i-prawdziwe-zagrozenia/.] i.e., software or a machine that could accomplish any unspecified task that human beings can perform, or even surpass human performance.  [Sam S. Adams et al., Mapping the Landscape of Human-Level Artificial General Intelligence, AI Mag., Spring 2012, at 25, 26.]  Hence, one cannot “talk” to or “interview” generative AI software, nor can such software share “its views” on a given subject.  Accordingly, one can only input data into the software (i.e., an interview question) and obtain other data in return (i.e., generative AI’s answer).

Methodology

For the purposes of producing a scholarly article, conducting a semi-structured interview with generative AI requires adopting scientific discipline and acknowledging certain limitations.  The basic rules and choices followed in this article will be explained below.

  1. This paper builds upon the authors’ prior research, which resulted in a book chapter on technology and its influence on international arbitration, released a few months before the global boom of generative AI.  [See generally Piotr Wilinski & Maciej Durbas, Data Mining, Text Analytics and International Commercial Arbitration, in International Arbitration and Technology (Pietro Ortolani et al. eds., 2021).]  Additionally, the authors presented and discussed their initial reflections on the influence of generative AI on international arbitration during an academic seminar organized by Radboud University in Nijmegen on May 19, 2023.
  2. The authors did not use generative AI to produce the main body of this article, including the methodology section and the conclusions.  Any answer provided by generative AI is clearly indicated as such.
  3. The authors selected two independent generative AI tools, namely: (i) ChatGPT 4.0 (paid option) and (ii) Google Bard (free).  The authors considered that posing the questions to, and comparing the answers of, at least two generative AI tools is sufficient for the purposes of this exercise.
  4. The authors decided to test three working hypotheses:
    1. generative AI is a tool that is not ready to make independent decisions in the arbitral process;
    2. the answers provided by generative AI to the same question may vary; and
    3. generative AI may “hallucinate,” i.e., confidently give answers that are not supported by the training data provided to the AI, [See Sai Anirudh Athaluri et al., Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References, 15 Cureus 1 (2023).] and these “hallucinations” also extend to issues of international arbitration.
  5. To test these hypotheses, the authors prepared a list of fourteen main questions, with further sub-questions, corresponding to the initial reflections in their previous research and relating to the conduct of counsel and arbitrators in international arbitration and to the use of AI in the proceedings.
  6. The authors asked questions one by one.
  7. The authors framed the prompts so as to test the working hypotheses and to keep the answers within the paper’s word limit.  They aimed to ask only general questions and avoided any leading ones.
  8. The authors then compared and reproduced the answers in the appendix to this paper. Finally, they were able to test the working hypotheses.

The authors understand that, from a statistical perspective, one should consider interviewing more AI generators or sampling more answers.  However, given that the exercise in question is not a strictly scientific experiment, but more a matter of cognitive curiosity, they decided to limit themselves to the methodology described above.  In any event, they remain open to criticism and constructive feedback.
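For readers who wish to reproduce or extend the comparison, the procedure can also be automated.  The following is a minimal, illustrative sketch of how the same interview question could be posed programmatically to two chat models and the answers logged side by side.  It is not the authors’ actual workflow: the interviews were conducted through the web interfaces of ChatGPT 4.0 and Bard, and the OpenAI client, the model name, and the placeholder for the second tool below are assumptions made solely for illustration.

```python
# Illustrative sketch (not the authors' actual workflow): pose the same
# interview question to two chat models and log the answers for comparison.
# The OpenAI client and model name, and the stub for the second tool, are
# assumptions; the authors used the web interfaces of ChatGPT 4.0 and Bard.
import csv

from openai import OpenAI  # pip install openai; requires OPENAI_API_KEY

PREAMBLE = (
    "An arbitration lawyer interviews you on how AI can be used in the "
    "international arbitration process. Give a concise and easy-to-follow "
    "answer to all the following questions (no more than 200 words)."
)

QUESTIONS = [
    "Is there any legal or soft-law framework for using data mining, text "
    "analytics, or generative AI tools in international commercial arbitration?",
    # ... the remaining questions from the question set would follow here
]

client = OpenAI()


def ask_first_tool(question: str) -> str:
    """Send one question, prefixed by the interview preamble, to the first model."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system", "content": PREAMBLE},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content


def ask_second_tool(question: str) -> str:
    """Placeholder for the second generative AI tool (hypothetical)."""
    return "(answer from the second tool would be collected here)"


with open("interview_answers.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    writer.writerow(["question", "tool_1_answer", "tool_2_answer"])
    for q in QUESTIONS:
        # One row per question allows a side-by-side reading of both answers.
        writer.writerow([q, ask_first_tool(q), ask_second_tool(q)])
```

Repeating such a loop on different days, or with different sampling settings, would also make it easier to observe the answer variance addressed by the second working hypothesis.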

The Question Set

An arbitration lawyer interviews you on how artificial intelligence (“AI”) can be used in the international arbitration process.  Give a concise and easy-to-follow answer to all the following questions (no more than 200 words).

A. General questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

1. Is there any legal or soft-law framework for using data mining, text analytics, or generative AI tools in international commercial arbitration?

How could general principles of arbitration shape the use of these tools in arbitration?

Should there be any law or at least soft-law regulation of using these tools in international commercial arbitration?  What should be the regulators’ aim?

2. Should the use of data mining, text analytics, or generative AI tools be allowed at all? If so, should its users disclose this fact?

3. Is there enough data in international arbitration for data mining, text analytics, or generative AI tools to present data that the stakeholders could use?

Is there enough data from international commercial arbitration compared to international treaty arbitration?

Is the data comparable when the awards are rendered in a different form and structure, in different languages, and under different rules?

4. What are the ethical considerations when using data mining, text analytics, or generative AI tools in international arbitration?

5. Should stakeholders in arbitration be concerned when they input sensitive data to data mining, text analytics, or generative AI tools?

6. Could AI make the decisions for stakeholders in arbitration (parties, counsel, and arbitrators)?

B. Detailed questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

7. Could parties use data mining, text analytics, or generative AI tools to select the arbitral seat or arbitral rules for their dispute?

Would you be able to propose an arbitral seat or set of rules for a given dispute based on your training data and data input by a given stakeholder?

8. Could parties use data mining, text analytics, or generative AI tools to draft an arbitration clause either from the beginning or by amending an existing template?

Would you be able to draft an arbitration clause for a given dispute based on your training data and data input by a given stakeholder?

9. Could stakeholders in arbitration use data mining, text analytics, or generative AI tools to assess a potential conflict of interest between either counsel and a party or an arbitrator with parties, counsel, experts, or other arbitrators?

Soft-law documents that refer to conflict of interest often use vague terms like “significant interest in a party,” “regularly advises,” “close relationship,” “significant commercial relationship,” “related issue,” “significant fees or other revenues,” “regular basis.”  Do you think that generative AI could replace a human in assessing whether a given situation qualifies under vague terms?

10. Could parties use data mining, text analytics, or generative AI tools to select an arbitrator for their dispute?

Would these tools consider that arbitrators might change their mind or have rendered an award because of the particular circumstances of the case or do not entirely agree with every word drafted by their co-arbitrators?

11. Could parties (or e.g., litigation financers) use data mining, text analytics, or generative AI tools to assess the chances of winning a case in international arbitration?

What documents or information would the parties need to deliver for the AI to be able to do so?

How would AI tackle the use of vague legal definitions (e.g., “foreseeable,” “fault,” “reasonable”)?  Does the understanding of these vague terms and their application to a given case vary significantly depending on a given case, its facts and applicable law?

12. Could arbitrators use data mining, text analytics, or generative AI tools to examine the evidence in international arbitration or in drafting awards or other documents?

13. Could parties use data mining, text analytics, or generative AI tools to draft more persuasive submissions in arbitration?

14. Could arbitral institutions use data mining, text analytics, or generative AI tools to critically scrutinize an arbitral award?

Conversation No. 1: ChatGPT 4.0

An arbitration lawyer interviews you on how AI can be used in the international arbitration process.  Give a concise and easy-to-follow answer to all the following questions (no more than 200 words).

A. General questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

1. Is there any legal or soft-law framework for using data mining, text analytics, or generative AI tools in international commercial arbitration?

As of my last update in September 2021, there isn't a universally recognized legal or soft-law framework specifically tailored for the use of AI, data mining, text analytics, or generative AI tools in the context of international commercial arbitration.  However, several principles of international arbitration, such as the duty to act fairly and impartially and to treat parties with equality, would apply to the use of any tool or method in the arbitration process, including AI.

That said, various international arbitration institutions and bodies have begun exploring and commenting on the use of technology in the process.  The ICCA-IBA Joint Task Force on Data Protection in International Arbitration, for example, has published a roadmap on data protection.  While this doesn't focus on AI specifically, it does touch upon technological advancements.

It's worth noting that the adoption and integration of AI tools would have to respect the confidentiality of proceedings, the protection of personal data, and the general principles of procedural fairness.  As the field evolves, it's plausible that more specific guidelines or frameworks might emerge to address these new technological dimensions in arbitration.

How could general principles of arbitration shape the use of these tools in arbitration?

Equality and Fair Treatment:  All parties must be given an equal opportunity to benefit from AI tools.  If one party employs AI for analysis, the other should be made aware and possibly given access to similar tools.  It ensures that neither side gains an unfair advantage, upholding the essence of equal treatment.

Impartiality and Independence:  AI models can have inherent biases based on their training data.  It’s crucial that any AI tool used doesn’t favor one party over the other.  Transparent methodologies and possibly third-party validations can ensure the neutrality of AI outputs.

Confidentiality:  Arbitral proceedings often require confidentiality.  AI tools need to be secure, with robust measures against unauthorized access or data breaches.  They should also not store sensitive information beyond the duration of the process.

Transparency:  While AI’s operations might be complex, its application in arbitration should be transparent.  Both parties should be informed about how and why a particular AI tool is being used, ensuring there are no “black box” operations.

Procedural Efficiency:  AI can analyze vast datasets quickly, which could expedite certain stages of arbitration.  However, speed shouldn't compromise accuracy or fairness.

Right to be Heard:  If AI is utilized, especially in evidence evaluation, parties must have the chance to challenge or question its outputs.  It safeguards the principle that all parties have a voice in the process.

In summary, while AI offers promising advancements for arbitration, its implementation should always respect the fundamental principles that make arbitration fair and just.

Should there be any law or at least soft-law regulation of using these tools in international commercial arbitration?  What should be the regulators’ aim?

Yes, given the increasing influence and potential implications of AI tools, there should be at least soft-law regulations for their use in international commercial arbitration.  Soft-law instruments, like guidelines or best practices, can offer a flexible yet structured approach to integrating AI into arbitration.

Regulators’ Aim:  The primary aim should be to ensure:

Fairness and Equity:  AI tools should be accessible and understandable to both parties.  Any AI application must be transparent and free from biases, ensuring equal treatment of parties.

Maintaining Confidentiality:  Regulations should enforce robust data protection and security measures, safeguarding the confidential nature of arbitral proceedings.

Transparency and Understandability:  Parties should be informed about AI’s role, how it operates, and its implications.  This ensures the process remains transparent and the outcomes are trustworthy.

Enhancing Procedural Efficiency:  AI’s potential in expediting processes, like document review or data analysis, should be harnessed without compromising the quality of the arbitration.

Ethical Use:  Guidelines should promote ethical considerations, addressing concerns like potential misuse, ensuring AI tools align with the overarching principles of arbitration.

In essence, any regulation should strike a balance between leveraging the benefits of AI and preserving the core principles of arbitration, ensuring justice, fairness, and efficiency in the process.

2. Should the use of data mining, text analytics, or generative AI tools be allowed at all? If so, should its users disclose this fact?

Should AI Tools be Allowed?:  Yes, AI tools like data mining and text analytics can offer significant advantages in international arbitration.  They can enhance efficiency, particularly in tasks like document review, evidence analysis, and identifying patterns in large datasets.  However, the usage should always prioritize the fairness, integrity, and transparency of the arbitration process.

Disclosure of AI Tool Usage:  It’s advisable to disclose the use of AI tools in arbitration for several reasons:

Transparency and Trust:  Arbitration relies on the trust and confidence of its stakeholders.  Disclosing the use of AI tools can foster transparency, ensuring parties are aware of how evidence or arguments are being processed.

Equality:  Disclosure ensures both parties are on an equal footing.  If one party employs AI tools, the other should have the opportunity to understand, challenge, or potentially utilize similar tools.

Integrity of the Process:  AI’s outputs, especially from generative tools, can be influenced by their training data and algorithms.  Disclosing their usage allows parties to question or validate these outputs, preserving the process's integrity.

In essence, while the advantages of AI tools are evident, their usage should be disclosed to maintain the core principles of transparency, fairness, and trust in international arbitration.

3. Is there enough data in international arbitration for data mining, text analytics, or generative AI tools to present data that the stakeholders could use?

International arbitration, by its nature, is often confidential, which can limit the amount of publicly available data.  However, within individual proceedings, significant amounts of data, such as case documents, evidentiary submissions, and transcripts, may accumulate.

There are some repositories and databases, especially from major arbitration institutions, that provide redacted awards, decisions, or other relevant materials.  Additionally, legal databases and publications, though not exhaustive, contain collections of arbitral awards and related literature.

For stakeholders within a specific arbitration, the volume of data generated could be substantial and suitable for data mining or text analytics.  For example, AI tools can be employed to review large sets of documents, identifying patterns, key terms, or relevant passages.

However, for generative AI tools that require vast datasets to generate new content or predict outcomes, the confidentiality and relatively fragmented nature of international arbitration data might present challenges.

In conclusion, while there may be enough data within specific proceedings for data analysis or text mining, the broader application of generative AI tools in predicting trends or outcomes in international arbitration might require more comprehensive data sets, which are currently limited by confidentiality and the decentralized nature of the field.

Is there enough data from international commercial arbitration compared to international treaty arbitration?

International commercial arbitration and international treaty arbitration (often investor-state dispute settlement or ISDS) both produce significant amounts of data.  However, there are notable differences in their availability:

Confidentiality and Transparency:

International Commercial Arbitration:  It predominantly involves disputes between private entities and is characterized by a higher degree of confidentiality.  As a result, many awards and decisions aren’t publicly accessible, restricting the availability of data.

International Treaty Arbitration:  In ISDS cases, where investors and states are parties, there’s often greater public interest, leading to increased transparency initiatives.  Many treaties or arbitration rules now mandate or encourage the publication of awards and decisions, making more data available compared to commercial arbitration.

Volume of Cases:

International Commercial Arbitration:  It is more common and frequent than treaty arbitration.  However, the confidentiality aspect often hides the actual volume of data.

International Treaty Arbitration:  Though fewer in number, the decisions, awards, and associated documents from these cases are often more detailed and, thanks to transparency initiatives, more accessible.

In conclusion, while international commercial arbitration might generate more cases, the data from international treaty arbitration is often more accessible due to its semi-public nature and the transparency initiatives associated with it.

Is the data comparable when the awards are rendered in a different form and structure, in different languages, and under different rules?

When considering the comparability of data derived from arbitration awards that differ in form, structure, language, and rules, several challenges and considerations arise:

Form and Structure:  Arbitration awards may vary greatly in their formatting and structure, depending on the arbitral institution, the preferences of the arbitrators, or the legal traditions involved.  While AI tools can help standardize and categorize diverse data, the inconsistencies in form might make direct comparisons more challenging.

Languages:  Awards rendered in different languages pose a natural barrier to direct comparison.  Advanced AI tools can provide translation, but nuances, legal terminologies, or cultural context might get lost or misinterpreted.  This variability can affect data analysis outcomes.

Different Rules:  Arbitrations governed by different rules might have distinct procedural and substantive elements.  For AI to draw meaningful insights, it should be trained to recognize and adjust for these differences.

In summary, while the data from diverse arbitration awards can provide valuable insights, direct comparisons are challenging due to the variability in form, language, and rules.  Advanced AI tools can assist in standardizing and translating the data, but careful consideration is necessary to ensure accurate and meaningful analysis.

4. What are the ethical considerations when using data mining, text analytics, or generative AI tools in international arbitration?

Bias and Fairness:  AI tools, particularly those trained on historical data, can inadvertently perpetuate existing biases.  It’s crucial to ensure that these tools do not favor one party over another, undermining the core principle of equal treatment in arbitration.

Transparency and Disclosure:  Parties have the right to know how decisions affecting their case are made.  If AI tools influence the arbitration process or the decision-making of the tribunal, this should be transparently disclosed to all stakeholders.

Confidentiality and Data Security:  Arbitration often involves sensitive information.  Ensuring the AI tools used uphold the strict confidentiality obligations is paramount.  Additionally, there's the need to protect data from breaches or unauthorized access.

Dependence on Technology:  Over-reliance on AI can lead to human arbitrators abdicating their judgment.  It’s essential to remember that AI is a tool, not a replacement for human expertise and judgment.

Accessibility:  If one party employs advanced AI tools, this could create an imbalance if the opposing party lacks access to similar tools.  Ensuring equal access or opportunities to challenge AI-generated data is critical to uphold fairness.

Accountability:  In cases where AI tools provide incorrect or misleading outputs, there should be mechanisms to challenge or review such outputs.  Holding human actors accountable for AI's actions is essential to maintain the integrity of the process.

In essence, while AI tools offer potential benefits for international arbitration, their use must be approached with careful consideration of ethical implications to maintain the fairness, integrity, and trustworthiness of the arbitration process.

5. Should stakeholders in arbitration be concerned when they input sensitive data to data mining, text analytics, or generative AI tools?

Data Security and Breaches:  AI systems, like any digital platform, are vulnerable to hacking or unauthorized access.  If sensitive data is compromised, it can have significant repercussions for the parties involved, especially in commercial disputes where proprietary information is at stake.

Confidentiality:  Arbitration is prized for its confidentiality.  However, once data is processed by AI tools, there’s a concern about where this data is stored, who has access, and how long it is retained.  Even with strict protocols, the potential for leaks or unintended sharing exists.

Data Misinterpretation:  AI tools, especially if not properly calibrated or trained, might misinterpret sensitive data, leading to skewed analyses or conclusions.  Such inaccuracies could inadvertently affect the arbitration outcome.

Third-party Involvement:  Many AI solutions involve third-party vendors or cloud services.  Engaging external entities increases the risk of data exposure or misuse.

Long-term Implications:  Even if data is encrypted and anonymized, there’s uncertainty about future technologies that might be able to de-anonymize or decrypt data.

Given these concerns, stakeholders should be cautious when inputting sensitive data into AI tools.  Ensuring robust security measures, clear data handling protocols, and working with trusted AI vendors can mitigate some of these risks.

6. Could AI make the decisions for stakeholders in arbitration (parties, counsel, and arbitrators)?

While AI has advanced capabilities, its role in arbitration decision-making should be approached with caution:

AI as a Decision Support Tool:  AI can analyze vast amounts of data quickly, identify patterns, and present relevant information.  This makes it an excellent tool for assisting human decision-makers by providing insights and flagging important data. For instance, AI can help in evidence analysis, identifying precedents, or suggesting probable outcomes based on past cases.

Limitations in Understanding Nuance:  While AI can process data, it lacks the intuitive understanding, emotional intelligence, and cultural awareness that human arbitrators bring to the table.  Arbitration often involves complex interplays of facts, legal principles, and human dynamics, which might be challenging for AI to fully grasp.

Accountability: A decision made by an AI lacks the human element of accountability.  If a dispute arises from an AI's decision, the lines of responsibility can become blurred.

Ethical Implications:  Allowing AI to make decisions could raise ethical issues, especially if stakeholders feel their case was determined by an algorithm rather than a thoughtful human process.

In conclusion, while AI can be an invaluable tool for enhancing the efficiency and accuracy of the arbitration process, it should serve as an assistant to human decision-makers rather than replace them.  The unique characteristics of human judgment remain vital in the intricate and nuanced world of arbitration.

B. Detailed questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

7. Could parties use data mining, text analytics, or generative AI tools to select the arbitral seat or arbitral rules for their dispute?

Yes, parties could leverage data mining, text analytics, and generative AI tools to inform their choice of arbitral seat and rules.  Here’s how:

Historical Analysis:  By analyzing data from past arbitrations, AI tools can provide insights into trends associated with specific arbitral seats or rules.  For instance, certain seats might be linked with quicker resolution times, more favorable enforcement rates, or particular biases.

Preference Matching:  Parties can input their preferences, and AI tools can scan available seats and rules to find the best match.  This can be based on factors like cost, speed, expertise, neutrality, or enforceability.

Predictive Analysis:  Generative AI models can predict potential outcomes or challenges based on historical data, aiding parties in foreseeing potential advantages or pitfalls associated with certain seats or rules.

Language and Cultural Considerations:  AI tools can analyze linguistic and cultural factors to suggest seats that would align with the parties’ backgrounds and preferences.

While AI can provide valuable insights, the ultimate decision should factor in human judgment and expertise, taking into account the unique characteristics of each dispute.  AI should be viewed as a complementary tool in this decision-making process rather than the sole determinant.

Would you be able to propose an arbitral seat or set of rules for a given dispute based on your training data and data input by a given stakeholder?

Given my current design and the state of AI as of my last update in September 2021, I don’t “predict” in the same way that specialized AI models designed for predictive analytics might.  However, based on the vast amount of information and patterns I’ve been trained on, I can provide insights:

Historical and Descriptive Information: I can provide detailed information on various arbitral seats, their historical significance, reputation, and the particularities of their arbitration-friendly legal frameworks.  Similarly, I can describe different arbitration rules, their features, and their appropriateness for various types of disputes.

User Input Analysis:  If a stakeholder provides specific criteria or preferences (e.g., neutrality, speed, cost), I can suggest arbitral seats or rules that historically align with those preferences.

General Recommendations:  While I can’t predict future outcomes, I can offer general recommendations based on known advantages and disadvantages of various seats and rules in relation to the information provided.

However, any suggestion I provide should be complemented with expertise from experienced arbitration practitioners.  The intricacies of selecting the right seat or rules for a specific dispute often require nuanced human judgment beyond pure data analysis.

8. Could parties use data mining, text analytics, or generative AI tools to draft an arbitration clause either from the beginning or by amending an existing template?

Absolutely, AI tools, including data mining, text analytics, and generative models, can assist parties in drafting or amending arbitration clauses:

Template Generation:  Generative AI models can produce a basic arbitration clause based on input parameters like preferred seat, language, number of arbitrators, and applicable rules.  By understanding the context, AI can suggest relevant wording and provisions.

Amending Existing Templates:  If parties have an existing clause they wish to modify, text analytics can identify specific segments for revision.  Generative models can then produce suggested amendments based on new parameters or requirements set by the parties.

Best Practices Analysis:  Data mining can help identify commonly accepted standards and practices in arbitration clause drafting.  By analyzing a vast number of clauses from various sources, AI can provide insights on prevailing trends and effective wording.

Risk Identification:  Text analytics can review a proposed clause and highlight potential ambiguities or areas of concern, ensuring clarity and enforceability.

Tailored Clauses:  Based on the specifics of a contract or relationship, AI can suggest custom provisions to address unique situations or concerns.

However, while AI can greatly aid the drafting process, human oversight remains crucial.  Experienced counsel should review and refine AI-generated clauses to ensure they align with the parties’ intentions and legal requirements.

Would you be able to draft an arbitration clause for a given dispute based on your training data and data input by a given stakeholder?

Yes, based on my training data and specific inputs from a stakeholder, I can provide a draft of an arbitration clause. Here’s how:

Input Analysis:  By understanding the specifics provided by the stakeholder—such as preferred seat of arbitration, language, governing law, number of arbitrators, and any unique requirements—I can tailor the clause to meet those preferences.

Reference to Historical Data:  My training data includes knowledge of various arbitration rules, practices, and precedents.  Using this, I can craft clauses that align with standard practices and are likely to be enforceable.

Customized Drafting:  Using the parameters set by the stakeholder, I can generate a clause tailored to the particulars of the intended agreement or contract.

However, it’s essential to note a few caveats:

Human Oversight:  While I can provide a draft, it’s crucial that experienced legal counsel reviews any AI-generated clause to ensure it captures the parties’ intent and adheres to relevant legal standards.

Dynamic Changes:  The field of arbitration, like all legal fields, evolves over time.  Regular updates and a contemporary understanding of the field are necessary for accurate and enforceable clause drafting.

In summary, while I can assist in drafting an arbitration clause, it’s imperative to combine this capability with human expertise for the best results.

9. Could stakeholders in arbitration use data mining, text analytics, or generative AI tools to assess a potential conflict of interest between either counsel and a party or an arbitrator with parties, counsel, experts, or other arbitrators?

Yes, data mining, text analytics, and generative AI tools can be employed by stakeholders in arbitration to assess potential conflicts of interest. Here’s how:

Relationship Mapping:  Data mining can sift through vast amounts of data to identify past relationships between parties, counsel, arbitrators, and experts.  For example, it can highlight if an arbitrator has previously arbitrated a dispute involving a certain party or counsel, or if they’ve had any professional or financial connections.

Analyzing Publications and Statements:  Text analytics can be applied to public statements, publications, or previous decisions made by arbitrators.  This can help identify any potential biases or inclinations that may hint at a conflict of interest.

Link Analysis:  AI tools can establish links between various entities based on shared directorships, financial transactions, or past engagements.  This can be instrumental in highlighting indirect conflicts that may not be immediately apparent.

Predictive Analysis:  Generative models, informed by historical data, might predict potential conflicts based on patterns and relationships.

However, while AI can flag potential conflicts, a human review is crucial.  Some conflicts or potential biases might be nuanced or context-dependent, requiring a more detailed and subjective assessment than AI tools can provide on their own.

Soft-law documents that refer to conflict of interest often use vague terms like “significant interest in a party,” “regularly advises,” “close relationship,” “significant commercial relationship,” “related issue,” “significant fees or other revenues,” “regular basis.”  Do you think that generative AI could replace a human in assessing whether a given situation qualifies under vague terms?

Generative AI has made significant strides in processing and understanding language.  However, assessing vague terms in soft-law documents poses challenges:

Contextual Understanding:  Vague terms like “significant interest,” “regularly advises,” or “close relationship” require a nuanced understanding of context.  While AI can analyze patterns in data, interpreting the contextual weight of such terms is challenging.

Ambiguity:  The inherent ambiguity in such terms means that they can be interpreted in various ways.  An AI might struggle to definitively categorize situations under such terms without clear parameters.

Human Experience and Judgment:  Assessing vague terms often involves not just textual analysis but also drawing upon human experiences, emotions, and ethical considerations.  This requires a depth of judgment and understanding that AI, at least in its current form, lacks.

Ethical and Societal Implications:  Relying solely on AI to interpret and make decisions based on vague terms might not align with societal expectations and ethical considerations surrounding fairness and transparency.

In conclusion, while generative AI can assist and provide insights by analyzing large datasets and identifying patterns, it cannot fully replace human judgment when interpreting and assessing situations based on vague terms.  Human expertise and intuition remain integral to such evaluations.

10. Could parties use data mining, text analytics, or generative AI tools to select an arbitrator for their dispute?

Yes, data mining, text analytics, and generative AI tools can assist parties in selecting an arbitrator for their dispute. Here’s how:

Profile Analysis:  Data mining can extract comprehensive profiles of potential arbitrators, detailing their past cases, decisions, areas of expertise, and other relevant experiences.

Match to Dispute Nature:  Text analytics can match the nature and complexity of a given dispute with arbitrators who have handled similar issues, ensuring subject-matter expertise.

Bias and Impartiality Check:  By analyzing past decisions and writings, AI can identify any potential biases, tendencies, or patterns in an arbitrator's approach.  This assists parties in selecting an arbitrator who is perceived as impartial and unbiased.

Feedback and Reputation:  AI tools can sift through large datasets, like legal forums or publications, to gather feedback and gauge the reputation of potential arbitrators in the arbitration community.

Predictive Insights:  Generative AI can offer predictive insights on possible decision-making patterns or tendencies of arbitrators based on historical data.

While AI provides a powerful tool for preliminary selection and shortlisting, the final decision should incorporate human judgment, considering the unique aspects of each dispute and the nuances that might not be captured purely by data analysis.

Would these tools consider that arbitrators might change their mind or have rendered an award because of the particular circumstances of the case or do not entirely agree with every word drafted by their co-arbitrators?

When utilizing AI tools in assessing arbitrators’ past decisions, there are inherent limitations:

Dynamic Nature of Decision-making:  While AI can analyze historical decisions and patterns, it cannot predict with certainty how an arbitrator might decide in future scenarios.  Decision-making can evolve over time and be influenced by myriad factors.

Case-specific Circumstances:  Every arbitration case has its unique set of facts, legal issues, and parties.  AI can’t fully grasp the nuances and subtleties that might cause an arbitrator to decide one way in a particular case but differently in another with seemingly similar issues.

Joint Awards:  In instances where awards are rendered by a panel of arbitrators, AI cannot discern which portions of the award were influenced by which arbitrator.  Even if an arbitrator does not agree with every word or sentiment, they might concur in the interest of producing a cohesive and unanimous decision.

In summary, while AI tools offer valuable insights into patterns and tendencies, they cannot fully account for the complex and multifaceted nature of arbitrators’ decision-making processes.  It’s essential to approach AI-generated data as one component of a broader assessment, complemented by human intuition and understanding of the arbitration landscape.

11. Could parties (or e.g., litigation financers) use data mining, text analytics, or generative AI tools to assess the chances of winning a case in international arbitration?

Parties or litigation financers can use data mining, text analytics, and generative AI tools to assess the chances of winning a case in international arbitration, but with qualifications:

Historical Analysis:  Data mining can retrieve and analyze outcomes of past cases with similar fact patterns, legal issues, or parties, providing a preliminary insight into potential outcomes.

Pattern Recognition:  Text analytics can detect patterns in decisions rendered by specific arbitrators or tribunals, helping parties understand tendencies or inclinations.

Predictive Modelling:  Generative AI can, based on historical data, make predictions about possible outcomes.  However, this is probabilistic and not definitive.

However, there are inherent limitations:

Uniqueness of Each Case:  Every case has distinct facts, parties, and legal nuances.  While patterns might emerge from historical data, each case’s unique context can significantly impact the outcome.

Limited Data Access:  Many international arbitration decisions, especially in commercial matters, remain confidential, limiting the data pool available for analysis.

Complexity of Arbitration:  Unlike more formulaic legal processes, arbitration’s fluidity, driven by party autonomy and differing legal traditions, can introduce unpredictability.

In conclusion, while AI tools can offer valuable preliminary insights, they should not be the sole basis for decision-making.  Human expertise, intuition, and a deep understanding of the specifics of the case remain vital.

What documents or information would the parties need to deliver for the AI to be able to do so?

For AI to assess the chances of winning a case in international arbitration effectively, the following documents and information would be beneficial:

Pleadings:  Comprehensive details of the claims, counterclaims, and defenses, including statement of claim, statement of defense, and any counterclaims.

Evidence:  All documents, affidavits, expert reports, and witness statements that parties intend to rely upon.

Legal Authorities:  Case law, treaties, conventions, and any other legal sources cited by the parties.

Procedural Orders:  These provide context on the conduct of the arbitration and can shed light on the tribunal’s inclinations.

Details of Arbitrators:  Past decisions, writings, and any relevant public commentary by or about the arbitrators can help in understanding their approach to specific issues.

Historical Data:  Details of past arbitrations with similar issues, especially those involving the same arbitrators, institutions, or applicable laws.

Transcripts:  If available, transcripts of testimonies, procedural meetings, and hearings.

Interim Awards or Decisions:  These can give insights into the tribunal’s thinking or approach to particular matters.

Applicable Arbitration Rules:  The procedural framework under which the arbitration is being conducted.

Relevant Industry Data:  Especially in commercial disputes, understanding the industry context can be crucial.

It’s important to note that the effectiveness of AI’s analysis depends significantly on the quality, relevance, and comprehensiveness of the data provided.  Moreover, while AI can assist in analysis, the unique nature of each dispute requires a combination of AI and human judgment for accurate assessments.

How would AI tackle the use of vague legal definitions (e.g., “foreseeable,” “fault,” “reasonable”)?  Does the understanding of these vague terms and their application to a given case vary significantly depending on a given case, its facts and applicable law?

When addressing vague legal definitions like “foreseeable,” “fault,” or “reasonable,” AI can assist in several ways:

Historical Analysis:  AI can mine past arbitration decisions and legal texts to determine how such terms have been interpreted and applied in various contexts.  This gives a precedent-based understanding.

Pattern Recognition:  By analyzing vast amounts of data, AI can identify patterns in how specific terms are applied, offering insights into prevailing trends or shifts in interpretation.

Contextual Application:  Advanced AI models can understand the context within which a term is used, giving more nuanced insights into its possible interpretation in a given scenario.

However, challenges arise:

Variability Across Jurisdictions:  The interpretation of such terms can vary widely based on the applicable law, jurisdictional principles, or the cultural and legal traditions of the arbitrating parties.

Case Specificity:  Even within a single jurisdiction, the application of a vague term can change based on the unique facts and nuances of a case.

In conclusion, while AI can provide valuable insights into the potential interpretation of vague legal terms by analyzing historical data and patterns, it’s crucial to remember that each case is unique.  Human expertise and understanding of the specific case, its facts, and the applicable law remain essential in determining the most appropriate interpretation.

12. Could arbitrators use data mining, text analytics, or generative AI tools to examine the evidence in international arbitration or in drafting awards or other documents?

Yes, arbitrators can leverage data mining, text analytics, or generative AI tools in several ways within the international arbitration process:

Examining Evidence:  Data mining and text analytics can assist arbitrators in efficiently sifting through large volumes of evidence, highlighting key themes, inconsistencies, or patterns that might otherwise be overlooked.

Drafting Awards:  Generative AI tools can help draft sections of awards by providing templates based on historical decisions or standard wording.  However, the final content would need a human touch to ensure accuracy, fairness, and applicability to the unique case at hand.

Legal Research:  AI can quickly scour vast databases for relevant case law, statutes, and treaties that pertain to the issues in dispute, making the research process more efficient.

Consistency Checks:  AI can help arbitrators ensure that their awards are consistent with their previous decisions or that they are in line with predominant interpretations of certain rules or laws.

Translation and Language Tools:  In multi-lingual arbitrations, AI-driven translation tools can assist in understanding evidence presented in different languages.

However, while these tools can streamline and enhance the arbitration process, they should complement, not replace, the human judgment and expertise that are foundational to arbitration’s integrity and effectiveness.

13. Could parties use data mining, text analytics, or generative AI tools to draft more persuasive submissions in arbitration?

Absolutely, parties can utilize data mining, text analytics, and generative AI tools to draft more persuasive submissions in arbitration:

Historical Analysis:  By examining past submissions that were successful, AI can identify patterns or arguments that have historically proven to be effective.

Legal Research:  Data mining can quickly extract relevant precedents, case law, or legal principles that can bolster a party’s arguments.

Content Organization:  Text analytics can help structure submissions in a manner that emphasizes key points and presents information logically and clearly.

Tone and Style Analysis:  Some AI tools can gauge the tone and style of text, ensuring that submissions are not only legally sound but also engaging and accessible.

Language Precision:  Generative AI can suggest more precise legal language or terminology, ensuring clarity and reducing the risk of misinterpretation.

Reference Checks:  AI can automatically check references or citations, ensuring that they are correct and relevant to the arguments being made.

While AI tools can enhance the drafting process by providing insights and efficiency, human judgment, understanding of the specific facts, and legal expertise remain indispensable for crafting truly persuasive submissions.

14. Could arbitral institutions use data mining, text analytics, or generative AI tools to critically scrutinize an arbitral award?

Certainly, arbitral institutions can utilize data mining, text analytics, and generative AI tools to critically scrutinize an arbitral award:

Consistency Check:  AI can quickly compare the award against a vast database of previous awards to ensure consistency in legal reasoning, application of rules, and even in the awarding of damages.

Error Detection:  Text analytics can identify potential errors in the award, such as incorrect references, inconsistent numbering, or even contradictions within the document itself.

Formatting & Structure:  AI can review the award for adherence to prescribed formats or structures that the institution might have in place.

Legal Citation Verification:  Data mining tools can verify legal citations, ensuring that cited cases, statutes, or treaties are both accurate and relevant.

Tone & Language Analysis:  AI tools can gauge the tone of the award, ensuring it is neutral and professional, and can suggest refinements in language for clarity.

Confidentiality Check:  AI can be programmed to scan for confidential information, ensuring that any sensitive data that shouldn’t be publicly disclosed is redacted or addressed.

However, while AI can aid in the scrutiny process, human oversight remains crucial to ensure that the award respects the specifics of the case, parties’ intentions, and the nuances of the law.

Conversation No. 2: Bard

An arbitration lawyer interviews you on how AI can be used in the international arbitration process.  Give a concise and easy-to-follow answer to all the following questions (no more than 200 words).

A. General questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

1. Is there any legal or soft-law framework for using data mining, text analytics, or generative AI tools in international commercial arbitration?

There is no specific legal or soft-law framework for using data mining, text analytics, or generative AI tools in international commercial arbitration.  However, the UNCITRAL Model Law on International Commercial Arbitration (“UNCITRAL Model Law”) and the IBA Rules on the Taking of Evidence in International Arbitration (“IBA Rules”) provide some general principles that can be applied to the use of these tools.  For example, the UNCITRAL Model Law states that the parties must be treated equally and must have the full opportunity to present their case.  This principle could be interpreted to mean that the parties should be given the opportunity to object to the use of AI tools, if they believe that these tools would unfairly prejudice their case.

The IBA Rules also address the issue of electronic evidence, which includes data mining, text analytics, and generative AI tools.  The IBA Rules state that electronic evidence should be treated the same as any other type of evidence, and that the parties should be given the opportunity to inspect and challenge it.

In addition to these general principles, there are a number of specific issues that need to be considered when using AI tools in international commercial arbitration. These include:

The admissibility of AI-generated evidence:  Is AI-generated evidence admissible as evidence in arbitration?  This is a complex question that has not yet been definitively answered.

The fairness of using AI tools:  Could the use of AI tools unfairly prejudice one party or the other?  This is also a complex question that depends on the specific circumstances of the case.

The security and confidentiality of data:  How can the security and confidentiality of data be protected when using AI tools?  This is an important issue, given the sensitive nature of the data that may be involved in arbitration proceedings.

Overall, there is no clear legal or soft-law framework for using AI tools in international commercial arbitration.  However, the principles set out in the UNCITRAL Model Law and the IBA Rules can provide some guidance on how to use these tools in a fair and transparent manner.

How could general principles of arbitration shape the use of these tools in arbitration?

Sure, here are some ways in which general principles of arbitration could shape the use of AI tools in arbitration:

Party autonomy:  The principle of party autonomy means that the parties to an arbitration have the right to agree on the procedures to be followed.  This means that the parties could agree to use AI tools in their arbitration, if they so choose.

Fairness:  The principle of fairness means that the parties to an arbitration should be treated equally and should have the opportunity to present their case.  This means that the use of AI tools should not unfairly prejudice one party or the other.

Confidentiality:  The principle of confidentiality means that the information disclosed in an arbitration should be kept confidential.  This means that the use of AI tools should not violate the confidentiality of the arbitration.

Due process:  The principle of due process means that the parties to an arbitration should have the opportunity to challenge the evidence presented against them.  This means that the use of AI tools should not be used to introduce evidence that is not admissible or that has not been challenged by the parties.

Overall, the general principles of arbitration can help to ensure that the use of AI tools is fair, transparent, and in accordance with the law.

Here are some specific examples of how AI tools could be used in international commercial arbitration:

Data mining:  Data mining can be used to analyze large amounts of data, such as contracts, financial records, and emails.  This can help to identify patterns and trends that may be relevant to the arbitration.

Text analytics:  Text analytics can be used to extract meaning from text documents, such as contracts, pleadings, and expert reports.  This can help to identify key terms and concepts that are relevant to the arbitration.

Generative AI:  Generative AI can be used to create new documents, such as witness statements and expert reports.  This can help to streamline the arbitration process and reduce the cost of legal fees.

It is important to note that the use of AI tools in arbitration is still in its early stages.  There are a number of challenges that need to be addressed, such as the admissibility of AI-generated evidence and the fairness of using AI tools.  However, AI has the potential to revolutionize the arbitration process and make it more efficient, effective, and fair.

Should there be any law or at least soft-law regulation of using these tools in international commercial arbitration?  What should be the regulators’ aim?

Sure, here are my thoughts on the regulation of AI tools in international commercial arbitration:

Whether there should be regulation:  There is no consensus on whether there should be regulation of AI tools in international commercial arbitration.  Some people believe that there should be no regulation, as this would stifle innovation.  Others believe that there should be some regulation, in order to ensure that the use of AI tools is fair and transparent.

The aim of regulation:  If there is regulation, the regulators’ aim should be to ensure that the use of AI tools does not:

Unfairly prejudice one party or the other.

Violate the confidentiality of the arbitration.

Introduce evidence that is not admissible or that has not been challenged by the parties.

Undermine the integrity of the arbitration process.

Ultimately, the decision of whether or not to regulate AI tools in international commercial arbitration is a complex one.  There are a number of factors to consider, such as the potential benefits and risks of regulation, the need to balance innovation with fairness, and the need to ensure that the regulation is enforceable.

Here are some specific examples of how regulation could be implemented:

Codes of conduct:  Professional organizations could develop codes of conduct for the use of AI tools in arbitration.  These codes could set out ethical principles and best practices for the use of these tools.

Standards:  Industry bodies could develop standards for the use of AI tools in arbitration.  These standards could provide technical specifications for the development and use of these tools.

Legislation:  Governments could enact legislation regulating the use of AI tools in arbitration.  This legislation could set out the specific rules and procedures for the use of these tools.

The best approach to regulation will vary depending on the specific circumstances.  However, it is important to have some form of regulation in place to ensure that the use of AI tools in international commercial arbitration is fair, transparent, and in accordance with the law.

2. Should the use of data mining, text analytics, or generative AI tools be allowed at all? If so, should its users disclose this fact?

Sure, here are my thoughts on the use of data mining, text analytics, or generative AI tools in international commercial arbitration:

Should the use of AI tools be allowed?:  There is no consensus on whether the use of AI tools should be allowed in international commercial arbitration.  Some people believe that the use of AI tools should be allowed, as they can help to make the arbitration process more efficient and effective.  Others believe that the use of AI tools should not be allowed, as they could be used to unfairly prejudice one party or the other.

Should users disclose the use of AI tools?:  If the use of AI tools is allowed, then users should be required to disclose the use of these tools to the other party.  This is important to ensure that the other party has the opportunity to challenge the use of these tools and to ensure that the arbitration process is fair.

Ultimately, the decision of whether or not to allow the use of AI tools in international commercial arbitration is a complex one.  There are a number of factors to consider, such as the potential benefits and risks of using these tools, the need to balance innovation with fairness, and the need to ensure that the arbitration process is fair.

Here are some of the potential benefits of using AI tools in international commercial arbitration:

Efficiency:  AI tools can help to make the arbitration process more efficient by automating tasks such as document review and analysis.

Accuracy:  AI tools can help to improve the accuracy of the arbitration process by identifying patterns and trends that may be missed by human reviewers.

Cost-effectiveness:  AI tools can help to reduce the cost of the arbitration process by automating tasks and by providing insights that can help to shorten the duration of the arbitration.

Here are some of the potential risks of using AI tools in international commercial arbitration:

Bias:  AI tools can be biased, which could lead to unfair outcomes in arbitration.

Confidentiality:  AI tools could be used to violate the confidentiality of the arbitration process.

Admissibility:  The admissibility of AI-generated evidence is not yet clear, which could lead to challenges in the arbitration process.

Overall, the use of AI tools in international commercial arbitration has the potential to be beneficial, but there are also some risks that need to be considered.  It is important to carefully weigh the benefits and risks before deciding whether or not to use these tools in arbitration.

3. Is there enough data in international arbitration for data mining, text analytics, or generative AI tools to present data that the stakeholders could use?

The availability of data for AI tools in international arbitration is a complex issue.  There are a number of factors to consider, such as:

The amount of data that is available:  The amount of data that is available for AI tools to train on is limited.  This is because many international arbitration awards are not published, and even those that are published may not be fully redacted.

The quality of the data:  The quality of the data that is available is also limited.  This is because the data may be incomplete, inaccurate, or biased.

The diversity of the data:  The data that is available is also not very diverse.  This is because most of the data is from cases involving Western countries.

Despite these challenges, there is some data that can be used to train AI tools for international arbitration.  This data includes:

Published arbitral awards

Unpublished arbitral awards

Legal documents

Commercial contracts

Financial records

Emails

Expert reports

By using this data, AI tools can be trained to identify patterns and trends that may be relevant to international arbitration cases.  However, it is important to note that the results of AI tools should not be used as the sole basis for decision-making in arbitration cases.  The results of AI tools should be used in conjunction with the judgment of human experts.

Overall, the availability of data for AI tools in international arbitration is limited.  However, there is some data that can be used to train AI tools.  These tools can be used to identify patterns and trends that may be relevant to international arbitration cases.  However, the results of AI tools should not be used as the sole basis for decision-making in arbitration cases.

Is there enough data from international commercial arbitration compared to international treaty arbitration?

The availability of data from international commercial arbitration (“ICA”) and international treaty arbitration (“ITA”) is different.

ICA:  There is more data available from ICA than ITA.  This is because ICA is more common than ITA, and there are more published arbitral awards from ICA cases.

ITA:  There is less data available from ITA than ICA.  This is because ITA is less common than ICA, and there are fewer published arbitral awards from ITA cases.

The difference in the availability of data from ICA and ITA is due to a number of factors, such as:

The nature of the disputes:  ICA disputes are typically commercial disputes, while ITA disputes are typically disputes between states.  Commercial disputes are more likely to be resolved through arbitration than state disputes.

The confidentiality of the arbitration process:  Arbitral awards are typically confidential, and this is especially true for ITA cases.  This means that fewer arbitral awards are published from ITA cases.

Is the data comparable when the awards are rendered in a different form and structure, in different languages, and under different rules?

The comparability of data from arbitral awards rendered in different forms and structures, in different languages, and under different rules is a complex issue.  There are a number of factors that can affect the comparability of data, such as:

The form and structure of the awards:  Arbitral awards can be rendered in a variety of forms and structures.  Some awards are very detailed, while others are more concise.  Some awards are written in a narrative style, while others are more technical.

The language of the awards:  Arbitral awards can be rendered in a variety of languages.  This can make it difficult to compare awards that are rendered in different languages.

The rules under which the awards are rendered:  Arbitral awards can be rendered under a variety of rules.  This can also make it difficult to compare awards that are rendered under different rules.

Despite these challenges, there are some ways to make the data from arbitral awards more comparable.  One way is to use a standardized coding system to identify the key elements of the awards.  Another way is to use machine translation to translate the awards into a common language.

By using these methods, it is possible to make the data from arbitral awards more comparable.  However, it is important to note that there will always be some degree of imprecision in comparing data from awards that are rendered in different forms and structures, in different languages, and under different rules.

Here are some specific examples of how AI tools can be used to make data from arbitral awards more comparable:

Coding:  AI tools can be used to code arbitral awards according to a standardized coding system.  This can help to identify the key elements of the awards, such as the parties, the issues, the facts, and the decision.

Machine translation:  AI tools can be used to translate arbitral awards into a common language.  This can help to make the awards more accessible to a wider audience and to facilitate comparisons between awards that are rendered in different languages.

Text analytics:  AI tools can be used to analyze the text of arbitral awards.  This can help to identify patterns and trends in the awards, such as the types of claims that are made, the types of arguments that are used, and the types of decisions that are rendered.

By using these methods, AI tools can help to make the data from arbitral awards more comparable.  This can be helpful for lawyers, arbitrators, and other stakeholders who need to analyze arbitral awards.

Here are some additional thoughts on the issue of data comparability in international arbitration:

The comparability of data is important for a number of reasons.  It can help to identify trends in arbitral awards, to develop legal arguments, and to predict the outcome of future cases.

The challenges to data comparability in international arbitration are significant.  However, there are a number of tools and techniques that can be used to address these challenges.

AI tools have the potential to play a significant role in improving the comparability of data in international arbitration.  However, it is important to use these tools carefully and to ensure that they are used in a way that is fair and transparent.

4. What are the ethical considerations when using data mining, text analytics, or generative AI tools in international arbitration?

Sure, here are some of the ethical considerations when using data mining, text analytics, or generative AI tools in international arbitration:

Fairness:  AI tools should be used in a fair and impartial manner.  This means that the tools should not be used to unfairly prejudice one party or the other.

Transparency:  The use of AI tools should be transparent to all parties involved in the arbitration.  This means that the parties should be aware of the fact that AI tools are being used and they should be given the opportunity to challenge the results of the tools.

Privacy:  The use of AI tools should not violate the privacy of the parties involved in the arbitration.  This means that the tools should not be used to collect or process personal data without the consent of the parties.

Accuracy:  AI tools should be accurate and reliable.  This means that the tools should be properly trained and tested before they are used in arbitration.

Accountability:  The use of AI tools should be accountable.  This means that there should be a way to hold the developers and users of AI tools accountable for any errors or biases in the tools.

These are just some of the ethical considerations that need to be taken into account when using AI tools in international arbitration.  It is important to carefully consider these considerations before using these tools in order to ensure that they are used in a fair and ethical manner.

Here are some additional thoughts on the ethical considerations of using AI tools in international arbitration:

The use of AI tools can raise a number of ethical concerns, such as the potential for bias, the violation of privacy, and the lack of transparency.

It is important to carefully consider these concerns before using AI tools in arbitration.

There are a number of steps that can be taken to mitigate the ethical risks of using AI tools, such as ensuring that the tools are properly trained and tested, that the results of the tools are transparent, and that there is a way to hold the developers and users of the tools accountable.

The use of AI tools in international arbitration is a rapidly evolving field.  As these tools become more sophisticated, it is important to continue to carefully consider the ethical implications of their use.

5. Should stakeholders in arbitration be concerned when they input sensitive data to data mining, text analytics, or generative AI tools?

Sure, here are my thoughts on the concerns that stakeholders in arbitration may have when they input sensitive data to data mining, text analytics, or generative AI tools:

Data security:  Stakeholders may be concerned about the security of their sensitive data when it is input into AI tools.  They may worry that the data could be hacked or stolen, or that it could be used for unauthorized purposes.

Data privacy:  Stakeholders may also be concerned about the privacy of their sensitive data.  They may worry that the data could be used to identify them or to track their activities.

Bias:  Stakeholders may also be concerned about the potential for bias in AI tools.  They may worry that the tools could be used to discriminate against them or to give them an unfair advantage.

Accuracy:  Stakeholders may also be concerned about the accuracy of the results produced by AI tools.  They may worry that the tools could make mistakes that could affect the outcome of their arbitration case.

These are just some of the concerns that stakeholders in arbitration may have when they input sensitive data to AI tools.  It is important to carefully consider these concerns before using these tools in order to ensure that the data is secure, private, and accurate.

Here are some additional thoughts on the concerns about data security, privacy, bias, and accuracy in AI tools:

The security of data is a major concern for many people.  AI tools can be used to collect and store large amounts of data, which makes it a target for hackers.

The privacy of data is also a major concern.  AI tools can be used to track people’s activities and to identify them.  This can be a problem if the data is not properly protected.

Bias is a potential problem with any AI tool.  AI tools are trained on data, and if the data is biased, the tool will be biased as well.  This can lead to unfair results.

The accuracy of AI tools is also a concern.  AI tools are not perfect, and they can make mistakes.  This can be a problem if the results of the tool are used to make important decisions.

It is important to be aware of these concerns when using AI tools in arbitration.  There are a number of steps that can be taken to mitigate these risks, such as using secure data storage, anonymizing data, and testing the accuracy of the tools.

Ultimately, the decision of whether or not to use AI tools in arbitration is a complex one.  There are a number of factors to consider, such as the nature of the case, the sensitivity of the data, and the availability of other options.  It is important to carefully weigh the risks and benefits before making a decision.

6. Could AI make the decisions for stakeholders in arbitration (parties, counsel, and arbitrators)?

AI can be used to assist stakeholders in arbitration in a variety of ways, but it is unlikely to ever make the decisions for them.

Here are some of the ways that AI can be used in arbitration:

Data analysis:  AI can be used to analyze large amounts of data, such as arbitral awards, legal documents, and financial records.  This can help stakeholders to identify patterns and trends that may be relevant to their cases.

Document review:  AI can be used to review documents, such as contracts, emails, and transcripts.  This can help stakeholders to identify important information and to save time.

Fact-finding:  AI can be used to find facts, such as the value of damages or the location of assets.  This can help stakeholders to build their cases.

Predictive analytics:  AI can be used to predict the outcome of a case.  This can help stakeholders to make informed decisions about their cases.

Negotiation support:  AI can be used to support negotiations between the parties.  This can help to reach a settlement more quickly and efficiently.

However, AI is not yet capable of making the decisions for stakeholders in arbitration.  These decisions require human judgment and understanding.  AI can be a valuable tool to assist stakeholders in arbitration, but it is not a replacement for human decision-making.

Here are some of the reasons why AI is unlikely to ever make the decisions for stakeholders in arbitration:

AI is not perfect:  AI tools can make mistakes.  This is because they are trained on data, and the data may be incomplete or inaccurate.

AI is not always objective:  AI tools can be biased.  This is because they are trained on data that may be biased.

AI does not have common sense:  AI tools do not have the same common sense as humans.  This means that they may not be able to understand the nuances of a case.

Ultimately, the decisions in arbitration will always be made by humans.  AI can be a valuable tool to assist stakeholders in arbitration, but it is not a replacement for human decision-making.

B. Detailed questions about the possible use of data mining, text analytics, or generative AI tools in international arbitration

7. Could parties use data mining, text analytics, or generative AI tools to select the arbitral seat or arbitral rules for their dispute?

Sure, here are some ways that parties could use data mining, text analytics, or generative AI tools to select the arbitral seat or arbitral rules for their dispute:

Data mining:  AI tools can be used to mine data on past arbitral cases to identify factors that are associated with successful outcomes.  This data could then be used to predict which arbitral seat or arbitral rules are more likely to lead to a favorable outcome for a particular party.

Text analytics:  AI tools can be used to analyze text from arbitral awards and other legal documents to identify trends and patterns.  This information could then be used to assess the strengths and weaknesses of different arbitral seats and arbitral rules.

Generative AI:  AI tools can be used to generate creative text formats, such as poems, code, scripts, musical pieces, email, letters, etc.  This could be used to create hypothetical scenarios and to explore different options for selecting the arbitral seat or arbitral rules.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for making decisions about arbitral seat or arbitral rules.  However, they can be a valuable tool to help parties to make more informed decisions.

Here are some of the potential benefits of using AI tools to select the arbitral seat or arbitral rules:

Increased efficiency:  AI tools can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI tools can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their decision-making.

Reduced bias:  AI tools can help to reduce bias in the decision-making process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the potential risks of using AI tools to select the arbitral seat or arbitral rules:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI tools can be a valuable tool to help parties to select the arbitral seat or arbitral rules for their dispute.  However, it is important to be aware of the potential risks and benefits before using these tools.

Would you be able to propose an arbitral seat or set of rules for a given dispute based on your training data and data input by a given stakeholder?

Yes, I could propose an arbitral seat or set of rules for a given dispute based on my training data and data input by a given stakeholder.

I am trained on a massive dataset of arbitral awards, legal documents, and other information related to international arbitration.  This data includes information on the arbitral seat, the arbitral rules, the parties, the issues, and the outcome of the cases.

I can use this data to identify factors that are associated with successful outcomes in arbitration.  For example, I can identify arbitral seats that are known for their neutrality and efficiency, or arbitral rules that are known for their flexibility and fairness.

I can also use this data to assess the strengths and weaknesses of different arbitral seats and arbitral rules.  For example, I can consider the location of the arbitral seat, the availability of legal expertise, and the cost of arbitration.

If a stakeholder provides me with information about their dispute, such as the parties, the issues, and the desired outcome, I can use this information to generate a list of potential arbitral seats and arbitral rules.  I can also provide an assessment of each option, based on my knowledge of the data.

It is important to note that I am not a substitute for human judgment.  I can only provide information and recommendations.  The final decision of the arbitral seat or arbitral rules should be made by the parties, in consultation with their lawyers.

Here are some of the benefits of using AI to propose an arbitral seat or set of rules:

Increased efficiency:  AI can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their decision-making.

Reduced bias:  AI can help to reduce bias in the decision-making process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to propose an arbitral seat or set of rules:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties to select the arbitral seat or arbitral rules for their dispute.  However, it is important to be aware of the potential risks and benefits before using these tools.

8. Could parties use data mining, text analytics, or generative AI tools to draft an arbitration clause either from the beginning or by amending an existing template?

Yes, parties could use data mining, text analytics, or generative AI tools to draft an arbitration clause either from the beginning or by amending an existing template.

These tools can be used to analyze past arbitral awards and other legal documents to identify trends and patterns.  This information could then be used to draft a clause that is tailored to the specific needs of the parties.

For example, data mining could be used to identify the most common clauses used in arbitration agreements.  Text analytics could be used to analyze the language of these clauses to identify key terms and concepts.  Generative AI could be used to generate new clauses based on the data and analysis.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for drafting an arbitration clause.  However, they can be a valuable tool to help parties to create a clause that is clear, concise, and enforceable.

Here are some of the potential benefits of using AI to draft an arbitration clause:

Increased efficiency:  AI tools can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI tools can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their drafting.

Reduced bias:  AI tools can help to reduce bias in the drafting process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to draft an arbitration clause:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties to draft an arbitration clause.  However, it is important to be aware of the potential risks and benefits before using these tools.

Would you be able to draft an arbitration clause for a given dispute based on your training data and data input by a given stakeholder?

Yes, I could draft an arbitration clause for a given dispute based on my training data and data input by a given stakeholder.

I am trained on a massive dataset of arbitral awards, legal documents, and other information related to international arbitration.  This data includes information on the parties, the issues, the outcome of the cases, and the arbitration clauses that were used.

I can use this data to identify factors that are associated with successful arbitration clauses.  For example, I can identify clauses that are clear, concise, and enforceable.

I can also use this data to assess the strengths and weaknesses of different arbitration clauses.  For example, I can consider the flexibility of the clause, the cost of arbitration, and the likelihood of enforcement.

If a stakeholder provides me with information about their dispute, such as the parties, the issues, and the desired outcome, I can use this information to generate a draft arbitration clause.  I can also provide an assessment of the clause, based on my knowledge of the data.

It is important to note that I am not a substitute for human judgment.  I can only provide information and recommendations.  The final decision of the arbitration clause should be made by the parties, in consultation with their lawyers.

Here are some of the benefits of using AI to draft an arbitration clause:

Increased efficiency:  AI can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their drafting.

Reduced bias:  AI can help to reduce bias in the drafting process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to draft an arbitration clause:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties to draft an arbitration clause.  However, it is important to be aware of the potential risks and benefits before using these tools.

9. Could stakeholders in arbitration use data mining, text analytics, or generative AI tools to assess a potential conflict of interest between either counsel and a party or an arbitrator with parties, counsel, experts, or other arbitrators?

Yes, stakeholders in arbitration could use data mining, text analytics, or generative AI tools to assess a potential conflict of interest between either counsel and a party or an arbitrator with parties, counsel, experts, or other arbitrators.

These tools can be used to analyze past cases and other data to identify factors that are associated with conflicts of interest.  This information could then be used to assess the likelihood of a conflict of interest in a particular case.

For example, data mining could be used to identify cases where counsel has represented both parties in the same dispute.  Text analytics could be used to analyze the language of arbitration agreements to identify clauses that may create conflicts of interest.  Generative AI could be used to generate hypothetical scenarios to explore the potential for conflicts of interest.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for assessing conflicts of interest.  However, they can be a valuable tool to help stakeholders to identify potential conflicts of interest and to take steps to mitigate them.

Here are some of the benefits of using AI to assess conflicts of interest:

Increased efficiency:  AI can help stakeholders to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI can help stakeholders to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their assessment.

Reduced bias:  AI can help to reduce bias in the assessment process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to assess conflicts of interest:

Data reliability:  The reliability of the data used by AI tools is important. If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if stakeholders want to understand the basis for the decision.

Overall, AI can be a valuable tool to help stakeholders to assess conflicts of interest.  However, it is important to be aware of the potential risks and benefits before using these tools.

Soft-law documents that refer to conflict of interest often use vague terms like “significant interest in a party,” “regularly advises,” “close relationship,” “significant commercial relationship,” “related issue,” “significant fees or other revenues,” “regular basis.”  Do you think that generative AI could replace a human in assessing whether a given situation qualifies under vague terms?

It is unlikely that generative AI could replace a human in assessing whether a given situation qualifies under vague terms like “significant interest in a party,” “regularly advises,” “close relationship,” “significant commercial relationship,” “related issue,” “significant fees or other revenues,” “regular basis” in the near future.

These terms are often used in soft-law documents because they are difficult to define precisely.  This is because the meaning of these terms can vary depending on the specific circumstances.  For example, what constitutes a “significant interest”  in a party may depend on the size of the interest, the nature of the interest, and the relationship between the party and the arbitrator.

Generative AI is a powerful tool that can be used to process large amounts of data and identify patterns.  However, it is not yet capable of understanding the nuances of human language and the context in which these terms are used.  As a result, it is likely that AI would need to be supplemented by human judgment in order to assess whether a given situation qualifies under these vague terms.

Here are some of the reasons why generative AI is unlikely to replace a human in assessing conflict of interest:

Vagueness of the terms:  The terms used in soft-law documents to define conflict of interest are often vague and open to interpretation.  This means that it is difficult for AI to determine whether a given situation qualifies under these terms.

Complexity of the factors involved:  The assessment of conflict of interest involves a complex set of factors, such as the nature of the relationship between the arbitrator and the party, the size and nature of the interest, and the potential for bias.  AI is not yet capable of considering all of these factors in making a decision.

Need for human judgment:  The assessment of conflict of interest requires human judgment.  This is because there are often no clear-cut rules that can be applied to every situation.  AI is not yet capable of exercising human judgment.

Overall, it is unlikely that generative AI will replace a human in assessing conflict of interest in the near future.  However, AI can be a valuable tool to assist humans in this process.

10. Could parties use data mining, text analytics, or generative AI tools to select an arbitrator for their dispute?

Yes, parties could use data mining, text analytics, or generative AI tools to select an arbitrator for their dispute.

These tools can be used to analyze past cases and other data to identify factors that are associated with effective arbitrators.  This information could then be used to assess the suitability of different arbitrators for a particular case.

For example, data mining could be used to identify arbitrators who have a high success rate in cases involving similar issues.  Text analytics could be used to analyze the decisions of arbitrators to identify their styles and approaches.  Generative AI could be used to generate hypothetical scenarios to explore the potential for different arbitrators to handle a particular case.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for selecting an arbitrator.  However, they can be a valuable tool to help parties to identify potential arbitrators and to make informed decisions.

Here are some of the benefits of using AI to select an arbitrator:

Increased efficiency:  AI can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their selection.

Reduced bias:  AI can help to reduce bias in the selection process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to select an arbitrator:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties to select an arbitrator.  However, it is important to be aware of the potential risks and benefits before using these tools.

Would these tools consider that arbitrators might change their mind or have rendered an award because of the particular circumstances of the case or do not entirely agree with every word drafted by their co-arbitrators?

It is unlikely that AI tools would be able to fully consider that arbitrators might change their mind or have rendered an award because of particular circumstances of the case or do not entirely agree with every word drafted by their co-arbitrators.

This is because AI tools are trained on data that is typically limited to the facts and legal arguments of the case.  They are not able to understand the nuances of human behavior, such as the reasons why an arbitrator might change their mind or the dynamics of the arbitral process.

Additionally, AI tools are often trained on data that is aggregated across many cases.  This means that they may not be able to accurately predict the behavior of an individual arbitrator in a particular case.

However, AI tools can be used to identify factors that may be associated with arbitrators changing their minds or not agreeing with their co-arbitrators.  For example, data mining could be used to identify cases where arbitrators have changed their minds after hearing new evidence.  Text analytics could be used to analyze the decisions of arbitrators to identify patterns of disagreement.

This information could then be used by parties and arbitrators to make informed decisions about the arbitration process.  For example, parties could use this information to select arbitrators who are less likely to change their minds or to agree with their co-arbitrators.  Arbitrators could use this information to manage the arbitral process in a way that minimizes the risk of disagreement.

Overall, AI tools can be a valuable tool to help parties and arbitrators understand the factors that may influence the behavior of arbitrators.  However, it is important to remember that AI tools are not perfect and they should not be used as the sole basis for making decisions.

11. Could parties (or e.g., litigation financers) use data mining, text analytics, or generative AI tools to assess the chances of winning a case in international arbitration?

Yes, parties (or e.g., litigation financers) could use data mining, text analytics, or generative AI tools to assess the chances of winning a case in international arbitration.

These tools can be used to analyze past cases and other data to identify factors that are associated with winning cases.  This information could then be used to assess the likelihood of winning a particular case.

For example, data mining could be used to identify cases where parties with similar facts and legal arguments have won.  Text analytics could be used to analyze the decisions of arbitrators to identify patterns of reasoning.  Generative AI could be used to generate hypothetical scenarios to explore the potential for different outcomes.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for assessing the chances of winning a case.  However, they can be a valuable tool to help parties to make informed decisions about whether to pursue arbitration and how to best prepare their case.

Here are some of the benefits of using AI to assess the chances of winning a case:

Increased efficiency:  AI can help parties to quickly and easily identify the most relevant data and information.  This can save time and resources.

Improved accuracy:  AI can help parties to identify patterns and trends that may not be obvious to humans.  This can improve the accuracy of their assessment.

Reduced bias:  AI can help to reduce bias in the assessment process.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to assess the chances of winning a case:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties assess the chances of winning a case.  However, it is important to be aware of the potential risks and benefits before using these tools.

What documents or information would the parties need to deliver for the AI to be able to do so?

The specific documents or information that parties would need to deliver to AI to assess the chances of winning a case would depend on the specific AI tool being used.  However, some common types of documents and information that may be required include:

The arbitration agreement

The pleadings and evidence filed by the parties

The decisions of arbitrators in similar cases

The laws and regulations that are relevant to the dispute

The expert reports that have been submitted by the parties

In addition to these documents and information, the AI tool may also require information about the parties, such as their financial resources and their experience in arbitration.

It is important to note that not all AI tools will require the same information.  Some tools may be able to work with a limited amount of data, while others may require a more comprehensive dataset.  It is important to consult with the developer of the AI tool to determine the specific requirements.

Here are some of the benefits of providing AI with access to documents and information:

Increased accuracy:  AI can be more accurate in its assessments when it has access to more data.

Reduced bias:  AI can be less biased when it has access to a more diverse dataset.

Improved efficiency:  AI can process large amounts of data more quickly and efficiently than humans.

However, there are also some risks associated with providing AI with access to documents and information:

Data privacy:  AI tools may collect and store sensitive data about the parties.  This data could be used for unauthorized purposes.

Security:  AI tools could be hacked or compromised, which could lead to the disclosure of confidential information.

Bias:  AI tools could be biased, depending on the data they are trained on.  This could lead to inaccurate assessments.

It is important to weigh the benefits and risks before deciding whether to provide AI with access to documents and information.

How would AI tackle the use of vague legal definitions (e.g., “foreseeable,” “fault,” “reasonable”)?  Does the understanding of these vague terms and their application to a given case vary significantly depending on a given case, its facts and applicable law?

AI can tackle the use of vague legal definitions in a number of ways.  One way is to use text analytics to identify patterns in the way that these terms have been used in past cases.  This information can then be used to develop a more precise definition of the term.

For example, the term “foreseeable” can be defined as “capable of being foreseen or predicted.”  However, what is considered “foreseeable” can vary depending on the specific facts of the case.  Text analytics can be used to identify patterns in the way that this term has been used in past cases, such as the types of events that have been considered foreseeable and the level of certainty that is required.  This information can then be used to develop a more precise definition of the term that is tailored to the specific context of the arbitration.

Another way that AI can tackle the use of vague legal definitions is to use machine learning to generate predictions about how these terms will be interpreted in a given case.  This can be done by training an AI model on a dataset of past cases in which these terms have been used.  The model can then be used to predict how these terms will be interpreted in a new case.

It is important to note that the understanding of vague terms and their application to a given case can vary significantly depending on a given case, its facts and applicable law.  This is because the meaning of these terms is often context-dependent.  As a result, it is important to use AI tools in conjunction with human judgment when interpreting these terms.

Here are some of the benefits of using AI to tackle the use of vague legal definitions:

Increased accuracy:  AI can be more accurate in its interpretations of vague terms when it is trained on a large dataset of past cases.

Reduced bias:  AI can be less biased in its interpretations of vague terms when it is trained on a diverse dataset of past cases.

Improved efficiency:  AI can interpret vague terms more quickly and efficiently than humans.

However, there are also some risks associated with using AI to tackle the use of vague legal definitions:

Data reliability:  The reliability of the data used to train AI models is important.  If the data is not reliable, the results of the models may not be accurate.

Algorithmic bias:  AI models can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these models.

Lack of transparency:  AI models can be opaque, making it difficult to understand how they make decisions.  This can be a problem if parties want to understand the basis for the decision.

Overall, AI can be a valuable tool to help arbitrators interpret vague legal definitions.  However, it is important to be aware of the potential risks and benefits before using these tools.

12. Could arbitrators use data mining, text analytics, or generative AI tools to examine the evidence in international arbitration or in drafting awards or other documents?

Yes, arbitrators could use data mining, text analytics, or generative AI tools to examine the evidence in international arbitration or drafting awards, or other documents.

These tools can be used to identify patterns and trends in the evidence, such as the frequency of certain words or phrases, or the relationships between different pieces of evidence.  This information can then be used to help arbitrators understand the evidence and to make decisions about the case.

For example, data mining could be used to identify cases where arbitrators have awarded damages based on similar evidence.  Text analytics could be used to analyze the evidence to identify key terms and concepts.  Generative AI could be used to generate hypothetical scenarios to explore the potential implications of the evidence.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for making decisions in arbitration.  However, they can be a valuable tool to help arbitrators to understand the evidence and to make informed decisions.

Here are some of the benefits of using AI to examine the evidence in international arbitration:

Increased efficiency:  AI can help arbitrators to quickly and easily identify the most relevant evidence.

Improved accuracy:  AI can help arbitrators to identify patterns and trends in the evidence that may not be obvious to humans.

Reduced bias:  AI can help to reduce bias in the assessment of the evidence.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to examine the evidence in international arbitration:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if arbitrators want to understand the basis for the decision.

Overall, AI can be a valuable tool to help arbitrators examine the evidence in international arbitration.  However, it is important to be aware of the potential risks and benefits before using these tools.

13. Could parties use data mining, text analytics, or generative AI tools to draft more persuasive submissions in arbitration?

Yes, parties could use data mining, text analytics, or generative AI tools to draft more persuasive submissions in arbitration.

These tools can be used to identify patterns and trends in past cases, such as the types of arguments that have been successful and the language that has been used.  This information can then be used to help parties draft submissions that are more likely to be persuasive.

For example, data mining could be used to identify cases where parties have won by arguing a particular point.  Text analytics could be used to analyze the language of successful submissions to identify key terms and concepts.  Generative AI could be used to generate hypothetical scenarios to explore the potential implications of the arguments.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for drafting submissions in arbitration.  However, they can be a valuable tool to help parties to draft more persuasive submissions.

Here are some of the benefits of using AI to draft submissions in arbitration:

Increased efficiency:  AI can help parties to quickly and easily identify the most relevant arguments and language.

Improved accuracy:  AI can help parties to identify patterns and trends in past cases that may not be obvious to humans.

Reduced bias:  AI can help to reduce bias in the drafting of submissions.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to draft submissions in arbitration:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if arbitrators want to understand the basis for the decision.

Overall, AI can be a valuable tool to help parties draft more persuasive submissions in arbitration.  However, it is important to be aware of the potential risks and benefits before using these tools.

14. Could arbitral institutions use data mining, text analytics, or generative AI tools to critically scrutinize an arbitral award?

Yes, arbitral institutions could use data mining, text analytics, or generative AI tools to critically scrutinize an arbitral award.

These tools can be used to identify patterns and trends in arbitral awards, such as the types of awards that are most likely to be challenged or the language that is most commonly used in awards.  This information can then be used to help arbitral institutions improve the quality of awards.

For example, data mining could be used to identify cases where awards have been challenged and the reasons for the challenges.  Text analytics could be used to analyze the language of awards to identify potential ambiguities or inconsistencies.  Generative AI could be used to generate hypothetical scenarios to explore the potential implications of the award.

It is important to note that AI tools are not perfect and they should not be used as the sole basis for scrutinizing an arbitral award.  However, they can be a valuable tool to help arbitral institutions improve the quality of awards.

Here are some of the benefits of using AI to scrutinize arbitral awards:

Increased efficiency:  AI can help arbitral institutions to quickly and easily identify potential problems with awards.

Improved accuracy:  AI can help arbitral institutions to identify patterns and trends in awards that may not be obvious to humans.

Reduced bias:  AI can help to reduce bias in the scrutiny of awards.  This is because they are not influenced by human emotions or prejudices.

Here are some of the risks of using AI to scrutinize arbitral awards:

Data reliability:  The reliability of the data used by AI tools is important.  If the data is not reliable, the results of the tools may not be accurate.

Algorithmic bias:  AI tools can be biased, depending on the data they are trained on.  This is a risk that should be carefully considered before using these tools.

Lack of transparency:  AI tools can be opaque, making it difficult to understand how they make decisions.  This can be a problem if arbitral institutions want to understand the basis for the decision.

Overall, AI can be a valuable tool to help arbitral institutions scrutinize arbitral awards.  However, it is important to be aware of the potential risks and benefits before using these tools.

Conclusions & Takeaways

A. Hypotheses Confirmed

As to the authors’ working hypotheses, the experiment confirmed that generative AI is a tool that is not yet ready to make independent decisions in the arbitral process.  In fact, this conclusion (or disclosure?) was reached independently by both ChatGPT and Bard.  The experiment also confirmed that the answers provided by generative AI to the same question may vary, though not significantly.  Additionally, the AI’s replies were often repetitive, relying on the same (albeit relevant) arguments.  Finally, although generative AI may “hallucinate,” we identified only a few significant, clearly visible hallucinations in our experiment.

B. Style & Form of Answering

First, comparing ChatGPT 4.0 and Bard, it seems that only the former succeeded in complying with the prompt, that is, an interview style of answering and the word limit.  Bard also appeared to misread the intentions behind the questions.  For example, when asked about the ways arbitration’s general principles could shape the use of the technology in arbitration, apart from giving a direct answer, Bard also described how the technology could be used in arbitration in the first place.  Bard likewise showed a tendency to append additional, general remarks at the end of its answer to a given question (“Here are some additional thoughts on . . .”).  Finally, Bard appeared to use more qualifiers and/or was more evasive in its responses (e.g., “decision of whether or not . . . is a complex one.”).

We also noted a difference in the style of the answers.  While ChatGPT 4.0 presented the pros and cons of a given solution or its benefits and challenges, Bard phrased it as a dilemma (“There is no consensus on whether the use of AI tools should be allowed in international commercial arbitration.  Some people believe that the use of AI tools should be allowed, as they can help to make the arbitration process more efficient and effective.  Others believe that . . .”).  It is a subjective interpretation, but we regarded Bard’s answers as less conclusive.  The structure of Bard’s answers was also stylistically more repetitive:  at first, it gave a seemingly direct response (“Sure, here are my thoughts on . . .”), which was then followed by more general reflections that may or may not have been relevant (“Here are some additional thoughts”).  Bard also repeated the same fragment of an answer across several questions, e.g., when describing a particular drawback of using the technology.  Overall, we regarded ChatGPT’s style of answering as more “mature” and balanced.

Additionally, Bard seemed to give more general answers, referring to data mining, text analytics, and AI, while ChatGPT 4.0 focused on AI, although the questions referred to all three solutions.

C. Hallucinations & Questionable Answers

That said, Bard sometimes presented nonsensical answers, e.g., referring to arbitrators who have “a high success rate” as if they were lawyers who “win” or “lose” a case.  Anyone dealing with arbitration can confirm that this is not an answer a human would give.  Additionally, when asked whether the parties could use data mining, text analytics, or generative AI tools to select the arbitral seat or arbitral rules for their dispute, Bard answered that generative AI:  “can be used to generate creative text formats, such as poems, code, scripts, musical pieces, email, letters, etc.  This could be used to create hypothetical scenarios and to explore different options for selecting the arbitral seat or arbitral rules.”  The reference to poems or musical pieces limits the persuasiveness of Bard’s answers.

Similarly, Bard made a somewhat controversial suggestion when asked about the potential use of AI in international commercial arbitration:  “Generative AI can be used to create new documents, such as witness statements and expert reports.  This can help to streamline the arbitration process and reduce the cost of legal fees.”  It is evident that generating witness statements or expert reports based on AI output rather than actual knowledge of the facts or genuine expertise remains controversial, to say the least.  Allowing such a practice would likely undermine the legitimacy of international arbitration.

We have not identified significant hallucinations in ChatGPT’s answers.  However, while assessing the use of AI by arbitrators, ChatGPT suggested that in such a scenario, “parties must have the chance to challenge or question its outputs.”  Bard made a similar suggestion.  This seems impractical, as the arbitrators evaluate the evidence in the award itself.  However, if “challenge or question” is understood to mean “comment,” the suggestion becomes more feasible and sensible.

Finally, Bard was also wrong in concluding that “Arbitral awards are typically confidential, and this is especially true for ITA [international treaty arbitration] [sic] cases.  This means that fewer arbitral awards are published from ITA cases.”  Bard’s recommendations were also occasionally so general that they were defective.  For example, it suggested that AI would not make decisions for arbitration users, but “AI can be used to support negotiations between the parties.  This can help to reach a settlement more quickly and efficiently.”  How that would work remains a mystery.

D. AI as (Merely) an Aid

ChatGPT 4.0 confirmed all of the theses of our previous research concerning the use of technology in arbitration, including (i) the observation that a regulatory framework is lacking and (ii) the conclusion that technology as such may be an aid for arbitration stakeholders but cannot substitute for them in the proceedings:1Similar conclusions were reached recently by Christophe Dugué, who asked Bing GPT to prepare a memo answering similar questions. See Christophe Dugué, Can AI Replace Lawyers in International Arbitration? Or be used in International Arbitration – A [guided] interview of a generative AI, LinkedIn Pulse (June 30, 2023), https://www.linkedin.com/pulse/can-ai-replace-lawyers-international-arbitration-used-dugu%C3%A9/.1  AI “should serve as an assistant to human decision-makers rather than replace them.  The unique characteristics of human judgment remain vital in the intricate and nuanced world of arbitration.”2Full Text of the Authors’ Interviews with Generative Artificial Intelligence, Annex, at 12.2  Consequently, technology could aid decision-making in many fields but could not make decisions on its own.  Bard’s answer went even further, suggesting that AI “is unlikely to ever make the decisions for [arbitration users].”

ChatGPT acknowledged the shortcomings of using technology in arbitration, particularly the lack of available and comparable data due to confidentiality (especially in commercial arbitration) and decentralization.  ChatGPT also pointed to the possible bias of AI technology, which could lead to favoring one party over another.  It consistently added an important caveat that any suggestion “should be complemented with expertise from experienced arbitration practitioners” who could offer a “subjective assessment.”  For instance, AI could help flag potential conflicts of interest, but a lawyer would need to make the final decision based on contextual understanding and human judgment.  ChatGPT even labeled human judgment and expertise as “foundational to arbitration’s integrity and effectiveness,” given the necessity to navigate the “specifics of the case, parties’ intentions and the nuances of the law.”  To put this in context, ChatGPT confirmed that AI would be highly productive in creating a shortlist of possible candidates for arbitrators, but it is the lawyer who selects the best person.

Similarly, Bard underlined that when provided with appropriate data, it could generate a list of potential arbitral seats and rules and assess the options.  Bard emphasized that AI could be useful in identifying “patterns and trends” which should be used “in conjunction with the judgment of human experts.”  It even clarified that “AI can be used to assist stakeholders in arbitration in a variety of ways, but it is unlikely to ever make the decisions for them.”  It flagged that AI is imperfect, is not always objective (may be biased), and lacks common sense.

The above would make it impossible for generative AI to grasp the nuances of a complex decision-making process, e.g., arbitrators changing their minds or rendering an award while not entirely agreeing with every word of it.  However, Bard suggested that it could provide data that parties could use “to select arbitrators who are less likely to change their minds or to agree with their co-arbitrators.  Arbitrators could use this information to manage the arbitral process in a way that minimizes the risk of disagreement.”  The potential use of technology by arbitrators, or by arbitral institutions exercising scrutiny over awards, was reinforced by Bard’s assessment that “data mining could be used to identify cases where awards have been challenged and the reasons for the challenges.  Text analytics could be used to analyze the language of awards to identify potential ambiguities or inconsistencies.”

This combination of near-instantaneous analysis of vast amounts of data with human expertise may paint the picture of the future arbitration lawyer.  Bard made an important observation regarding the availability of the data that AI analyzes, noting that:  “the data that is available is also not very diverse.  This is because most of the data is from cases involving Western countries.”  Although this statement might not be entirely correct, one may certainly observe that the majority of data and knowledge in international arbitration (be it available arbitral awards or scholarly writing) originates from the legal tradition of the Global North, which might indeed affect the outcome of the analysis.

E. “Autocriticism”

Interestingly, ChatGPT 4.0 put a great deal of emphasis on some of the drawbacks of using technology in arbitration.  First, it raised transparency concerns, encouraging disclosure of the use of AI tools to the other parties to give them “the opportunity to understand, challenge, or potentially utilize similar tools.”  Consequently, ChatGPT underlined the necessity of ensuring that “there are no ‘black box’ operations” known only to one party or to the arbitrators.  However, the interview did not resolve the authors’ dilemma concerning a potential inequality of arms stemming from differences in the parties’ funding or technological sophistication.

Second, ChatGPT flagged several concerns relating, for example, to possible data security breaches and the use of data by third parties.  ChatGPT also identified the possibility that “even if data is encrypted and anonymized, there’s uncertainty about future technologies that might be able to de-anonymize or decrypt data.”  It concluded that “stakeholders should be cautious when inputting sensitive data into AI tools.”  The authors regard this perspective as quite bleak but not altogether improbable.

F. Future of Legal Framework for Generative AI

Neither ChatGPT nor Bard was particularly helpful in advising on how to regulate the use of AI in international arbitration.  For example, ChatGPT suggested that AI in international arbitration should be regulated at least at the soft-law level.  At the same time, the way forward it suggested is (i) to maintain the confidentiality of the process and (ii) to ensure “transparency and understandability” of the role of AI in the process.  At first glance, these recommendations might not appear internally consistent, unless confidentiality is understood as being secured vis-à-vis third parties and transparency as being owed to the participants of a single arbitral proceeding.

When asked normative questions (“should . . .”), ChatGPT recommended the use of technology in international arbitration but advocated full disclosure of its use (“It’s advisable to disclose the use of AI tools in arbitration”).  One of the reasons mentioned by ChatGPT is the following:  “Disclosure ensures both parties are on an equal footing.  If one party employs AI tools, the other should have the opportunity to understand, challenge, or potentially utilize similar tools.”  While the possibility to object to or challenge the use of AI is something that every litigator will agree with, one might wonder what the “opportunity to understand” means within the adversarial process of international arbitration.

Summary

In summary, our exercise confirmed that various aspects of technology may (as always) be of valuable help to a lawyer.  However, the human factor will always remain crucial in resolving a dispute.  No one can imagine a lawyer who does not use email or a phone.  Yet one can certainly imagine a successful lawyer who could not send a fax, something that would have been unimaginable twenty years ago, just as, half a century ago, no lawyer had ever heard of a fax.  This confirms that technology changes, and prudent lawyers should identify those changes and use them to satisfy clients’ needs.  At the same time, a lawyer’s intuition, knowledge, and common sense are crucial and will always be the deciding factor in why clients should rely on the lawyer’s skill and judgment rather than merely on the product of an algorithm.

Endnotes

1Jinzhe Tan et al., ChatGPT as an Artificial Lawyer?, in AI4AJ 2023, Workshop on Artificial Intelligence for Access to Justice 7 (L. Karl Branting, ed., 2023), CEUR Workshop Proceedings, https://ceur-ws.org/Vol-3435/short2.pdf.
2See, e.g., Claire Morel de Westgaver, Canvassing Views on AI in IA: The Rise of Machine Learning, Kluwer Arbitration Blog (July 12, 2023), https://arbitrationblog.kluwerarbitration.com/2023/07/12/canvassing-views-on-ai-in-ia-the-rise-of-machine-learning.
3See Erik Brynjolfsson et al., Generative AI at Work 1 n.1 (Nat'l Bureau of Econ. Rsch., Working Paper No. 31161, 2023).
4See, e.g., Andrzej Kisielewicz, Bajki o sztucznej inteligencji i prawdziwe zagrożenia [Fairy tales of AI and real threats], Wszystko Co Najważniejsze (July 18, 2023), https://wszystkoconajwazniejsze.pl/andrzej-kisielewicz-bajki-o-sztucznej-inteligencji-i-prawdziwe-zagrozenia/.
5Sam S. Adams et al., Mapping the Landscape of Human-Level Artificial General Intelligence, AI Mag., Spring 2012, at 25, 26.
6See generally Piotr Wilinski & Maciej Durbas, Data Mining, Text Analytics and International Commercial Arbitration, in International Arbitration and Technology (Pietro Ortolani et al. eds., 2021).
7See Sai Anirudh Athaluri et al., Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References, 15 Cureus 1 (2023).
8Similar conclusions were reached recently by Christophe Dugué, who asked Bing GPT to prepare a memo answering similar questions. See Christophe Dugué, Can AI Replace Lawyers in International Arbitration? Or be used in International Arbitration – A [guided] interview of a generative AI, LinkedIn Pulse (June 30, 2023), https://www.linkedin.com/pulse/can-ai-replace-lawyers-international-arbitration-used-dugu%C3%A9/.
9Full Text of the Authors’ Interviews with Generative Artificial Intelligence, Annex, at 12.
About the Contributors

Dr. Maciej Durbas is a partner and co-leader of the arbitration practice at Kubas Kos Gałkowski.  Maciej has extensive experience in arbitration as an arbitrator and counsel in numerous proceedings before Polish and foreign arbitral institutions.  Maciej is a member of the Arbitration Committee of the Confederation Lewiatan Court of Arbitration.  He is also listed as a recommended arbitrator at several institutions in Poland and abroad.

Maciej defended his Ph.D. dissertation (cum laude) on the jurisdiction of arbitral tribunals.  Moreover, he has been coaching Jagiellonian University of Kraków’s team in the Willem C. Vis Moot competition for over a decade.


Dr. Piotr Wiliński specializes in international arbitration.  He is an Assistant Professor at the Commercial Law Department, where he teaches international commercial arbitration as well as investment arbitration.  His research focuses on the powers of arbitral tribunals.  Dr. Wiliński acts as the Academic Coordinator of the International Arbitration and Business Law variation of the Commercial Law LL.M., and he coaches Erasmus University's team for the Willem C. Vis International Commercial Arbitration Moot Court Competition.

Dr. Wiliński works as a Managing Associate (Knowledge) in the Litigation, Arbitration & Investigation practice group of Linklaters, where he is involved in arbitration proceedings under various arbitration rules.  Piotr conducts research on complex legal issues and analyses developments relevant to his practice group.  He contributes to Linklaters’ internal courses and academic publications.  Piotr regularly publishes articles on international arbitration and is a member of, amongst others, RAI, Young ICCA, ICC YAF, and LCIA YIAG.