Acta Scientific Computer Sciences

Review Article Volume 4 Issue 7

New Financial Risks Arising from Digital Finance: Disputes Over Automated Decision Systems and Algorithmic Assessments by ICT Forensic Expert Witnesses

Stephen Castell*

Chairman, CASTELL Consulting, UK

*Corresponding Author: Stephen Castell, Chairman, CASTELL Consulting, UK.

Received: April 15, 2022; Published: June 09, 2022

Abstract

The use of Artificial Intelligence (AI) and Machine Learning in the deployment of Automated Decision Systems (ADS), with computer software-implemented algorithms, or ‘algos’, now spreading widely in financial trading and other systems, inevitably means that new financial risks are arising from such increasing reliance on digital finance. Disputes over the use, and the damaging consequences, of ADS are therefore likely to escalate, and ICT Expert Witness professionals will doubtless become involved in forensic assessments of such algorithmic disputes. This paper first presents a review of published work on ADS and the use of ‘algos’, noting growing concern over specific ‘bias’ in, and, more generally, the ‘ethics’ of, algorithmic decision-making systems, with the use of ADS for Cybersecurity and Infrastructure Security a standout application area. In regard to digital finance, the author recently gave sworn testimony as expert witness in a USA Financial Industry Regulatory Authority (FINRA) arbitration hearing, in a dispute over the use of an ADS by a major US fund management corporation to close out the investment trading position of a client, with heavy losses, and this paper sets out the anonymized substance of that testimony. The issues raised in that case will increasingly feature in the financial investment world, and more generally in society, government, industry, and commerce. Care should be taken professionally when issues of ‘bias’ or ‘ethics’ in algorithms are raised: legal professionals must properly examine these subjective concepts within the processes of the humans who specified the Requirements for the algorithms, and not expect to find technical evidence of them in the computer code itself. ICT professionals are furthermore increasingly concerned that a ubiquitously software-dependent, ADS-driven society poses a real risk of financial collapse or other catastrophic consequences from software failure or disaster, on a national, or even international, scale. High-profile software-associated tragedies such as VW Dieselgate, the Boeing 737 Max, and the Post Office Horizon scandal serve to illuminate the critical issues potentially arising from wide-scale ADS implementations. Expert investigations must guard against the incorrect ‘presumption of the reliability of computer evidence’ that has become routine in pleadings brought before some courts and has been accepted unchallenged by presiding judges. The IT Leaders Forum of the British Computer Society has initiated a Software Risk and Resilience Working Group to research, gather evidence, study, analyse and deliberate upon these matters, with a focus as much on ADS as on other software applications and systems deployed in the UK. All concerned professionals are welcome to engage with this Working Group.
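As a purely illustrative aside (not drawn from the FINRA testimony, and using hypothetical names and threshold values), the minimal sketch below shows how simple the decision logic of an automated close-out ADS can be, and why any question of ‘bias’ or ‘ethics’ attaches to the human-chosen Requirements, here the maintenance-margin ratio and grace period, rather than to anything discoverable by inspecting the code itself.

```python
# Illustrative sketch only: a hypothetical automated close-out rule of the
# kind an ADS might apply to a margined investment account. The parameter
# values are invented for illustration; they are the Requirements decisions
# made by humans, and that is where questions of 'bias' or 'ethics' arise.

from dataclasses import dataclass

MAINTENANCE_MARGIN_RATIO = 0.25   # equity / market value below which close-out triggers
GRACE_PERIOD_HOURS = 0            # time allowed to post collateral before liquidation


@dataclass
class Account:
    equity: float        # current account equity
    market_value: float  # gross market value of open positions


def should_close_out(account: Account, hours_since_breach: float) -> bool:
    """Return True if this (hypothetical) ADS would liquidate the position."""
    if account.market_value <= 0:
        return False
    ratio = account.equity / account.market_value
    breached = ratio < MAINTENANCE_MARGIN_RATIO
    return breached and hours_since_breach >= GRACE_PERIOD_HOURS


# Example: equity at 20% of market value is closed out immediately under a
# zero-hour grace period; a longer grace period would change the outcome.
if __name__ == "__main__":
    print(should_close_out(Account(equity=20_000, market_value=100_000),
                           hours_since_breach=0))
```

The code is straightforwardly correct against its specification; whether a 25% threshold and a zero-hour grace period are fair to the client is a question for the humans who specified those Requirements, which is where forensic and legal examination of ‘bias’ properly belongs.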

Keywords: Artificial Intelligence (AI); Machine Learning; Automated Decision Systems; Digital; Finance; Risk; Economics; Management; ADS; Algorithm; Algos; Expert Witness; Forensic; Dispute; Litigation; Bias; Ethics; Cybersecurity; Infrastructure; Financial; Investment; Regulation; Testimony; Opinion; Lawyers; Courts; Software; Failure; Disaster; Dieselgate; Horizon; Reliability; Resilience; Applications; Systems; British Computer Society (BCS)

References

  1. Gibson Dunn. “2021 Artificial Intelligence and Automated Systems Annual Legal Review”. January 20, 2022. ‘2021 was a busy year for policy proposals and lawmaking related to artificial intelligence (“AI”) and automated technologies. The OECD identified 700 AI policy initiatives in 60 countries, and many domestic legal frameworks are taking shape. With the new Artificial Intelligence Act, which is expected to be finalized in 2022, it is likely that high-risk AI systems will be explicitly and comprehensively regulated in the EU. … the United States has not embraced a comprehensive approach to AI regulation as proposed by the European Commission, instead focusing on defense and infrastructure investment to harness the growth of AI’ (2022).
  2. Misuraca G and van Noordt C. “AI Watch. Artificial Intelligence in public services. Overview of the use and impact of AI in public services in the EU”. Science for Policy Report, Luxembourg: Publications Office of the European Union, 2020, JRC120399 EUR 30255 EN, 96 pages. Gianluca Misuraca, Senior Scientist on Digital Government Transformation, European Commission's Joint Research Centre, Digital Economy Unit, Seville - AI Watch Task Leader on AI for the Public Sector; and Colin van Noordt, PhD Researcher at the Ragnar Nurkse Department of Innovation and Governance at Tallinn University of Technology (TalTech), Estonia - External expert for the AI Watch Task on AI for the Public Sector (2020).
  3. Castell S. “Direct Government by Algorithm. Towards Establishing and Maintaining Trust when Artificial Intelligence Makes the Law: a New Algorithmic Trust Compact with the People”. Acta Scientific Computer Sciences 3.12 (2021): 04-21.
  4. “Small Business Advisory Review Panel for Automated Valuation Model (AVM) Rulemaking: Outline of Proposals and Alternatives Under Consideration”. February 23 (2022).
  5. Robinson S., et al. “CFPB Publishes Proposals to Prevent Algorithmic Bias in AVMs” (2022).
  6. EC, 2018: “Are there restrictions on the use of automated decision-making?”. European Commission.
  7. EC, 2018: “Article 4 Definitions ... (4) ‘profiling’ means any form of automated processing of personal data consisting of the use of personal data to evaluate certain personal aspects relating to a natural person, in particular to analyse or predict aspects concerning that natural person's performance at work, economic situation, health, personal preferences, interests, reliability, behaviour, location or movements”. GDPR, Article 4 (4) and Article 22 and Recitals (71) and (72).
  8. EC, 2018: “Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679 (wp251rev.01)”. EDPB Guidelines (EU) 2016/679.
  9. ICO, undated: “Rights related to automated decision making including profiling”. Guide to the General Data Protection Regulation (GDPR)/Individual rights.
  10. ICO, undated: “Right not to be subject to automated decision-making”. Guide to Law Enforcement Processing/Individual rights.
  11. EU Parliament and Council. “Directive (eu) 2019/1937 of the European parliament and of the COUNCIL of 23 October 2019 on the protection of persons who report breaches of Union law”. Official Journal of the European Union, L 305/17-56, 26.11. (2019): 40.
  12. BIICL and KCL. “Contesting automated decision making in legal practice: Views from practitioners and researchers”. Rapid summary report from an expert workshop: ‘Contesting AI Explanations in the UK’. 6 May (2021).
  13. Castell S. “The future decisions of RoboJudge HHJ Arthur Ian Blockchain: Dread, delight or derision?”. Computer Law and Security Review 34.4 (2018): 739-753.
  14. Association for Computing Machinery US Public Policy Council (USACM), 2017. “Statement on Algorithmic Transparency and Accountability”. (2017): 2.
  15. “How can humans keep the upper hand? The ethical matters raised by algorithms and artificial intelligence”. Report on the public debate led by the French data protection authority (CNIL) as part of the ethical discussion assignment set by the Digital Republic Bill, December (2017): 70.
  16. “Recommendation of the Council on Artificial Intelligence”. OECD/LEGAL/0449, Adopted on 22/05/2019. ‘Artificial Intelligence (AI) technologies and tools play a key role. This Recommendation provides a set of internationally-agreed principles and recommendations that can promote an AI-powered crisis response that is trustworthy and respects human-centred and democratic values’ (2019).
  17. “Draft text of the Recommendation on the Ethics of Artificial Intelligence”. Conference: Intergovernmental Meeting of Experts (Category II) related to a Draft Recommendation on the Ethics of Artificial Intelligence, online, 2021, Corporate author: UNESCO [62151], Document code: SHS/IGM-AIETHICS/2021/APR/4 (2021): 27.
  18. Castell S. “Forensic Systems Analysis: A Methodology for Assessment and Avoidance of IT Disasters and Disputes”. CUTTER Executive Report, Posted February 1, (2006).
  19. Sheffield M. “10 key factors to ensure software project success”. 06.25.19 (2019).
  20. Castell S. “The Fundamental Articles of I.AM Cyborg Law”. Beijing Law Review 11.4 (2020).
  21. Castell S. “Slaying the Crypto Dragons: Towards a CryptoSure Trust Model for Crypto-economics”. 25 March 2021. Chapter in Patnaik S., Wang TS., Shen T., Panigrahi S.K. (eds) Blockchain Technology and Innovations in Business Processes. Smart Innovation, Systems and Technologies 219 (2021): 49-65.
  22. Castell S. “A trial relying on computer evidence should start with a trial of the computer evidence”. OPINION by Stephen Castell, Computer Weekly, 22 Dec 2021: “Learning from the Post Office Horizon scandal: ‘The most widespread miscarriage of justice in recent British legal history’” (2021).
  23. Ringland G., et al. “BCS IT Leaders Forum Software Risk and Resilience Working Group”. WIP: “Terms of Reference for the Working Group” (2022).

Citation

Citation: Stephen Castell. “New Financial Risks Arising from Digital Finance: Disputes Over Automated Decision Systems and Algorithmic Assessments by ICT Forensic Expert Witnesses”. Acta Scientific Computer Sciences 4.7 (2022): 24-36.

Copyright

Copyright: © 2022 Stephen Castell. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.



