5G校园网: 5G Education, Nurturing Like Spring Rain

June 15, 2020

Jointly published by 5G文学网 and 5G校园网

Exclusive Feature: Campus Scan

Reported June 14, 2020

Congratulations to Singapore student Chua Yiting (蔡逸婷)
on winning Second Prize in the Cambridge law essay competition

On June 13, 2020, Trinity College of the University of Cambridge announced online the winners of the Robert Walker Prize for Essays in Law. Chua Yiting (Annabelle Chua), a student in the college section of Hwa Chong Institution, Singapore, stood out in a fiercely competitive international field to win Second Prize in the International Division.

Trinity College, Cambridge is one of the university's most renowned colleges. Now in its eighth year, the essay prize encourages students with an interest in law to research, reflect on, and argue important legal questions facing modern society.

This year's question was "Should legal disputes be determined by artificial, rather than human means?". Perhaps because so many people were staying home during the pandemic, the competition drew its largest field ever, with 175 entrants from around the world. Yiting was delighted with the result: "I am deeply honoured to bring credit to Singapore and to Hwa Chong Institution."

Yiting is the daughter of Dr Chua Chee Lai (蔡志礼), director of the 5G文学网 and 5G校园网 websites. Dr Chua and his wife Yap Hui Peng (叶慧萍) were naturally overjoyed at the news. He said:

"Yiting was afraid she would not place, so she submitted her essay quietly, without telling me or Hui Peng. The sudden good news from Cambridge gave us quite a shock! Even after winning she kept a low profile and did not tell her school!"

Yiting, who is enrolled in the Humanities Programme, has been an avid reader since childhood, having read close to ten thousand books, with an especially strong interest in literature, history, and law. She has competed, and won awards, in several Model United Nations competitions, which nurtured her interest in law and international relations.

In earlier years she took part in the World Scholar's Cup rounds held at Yale University, standing out among more than 1,200 students from around the world to place third in the 2016 Junior Division individual challenge and sixth overall in the 2017 Senior Division.

Trinity College, Cambridge

Chua Yiting with her father and mother

Chua Yiting

Trinity College, Cambridge
ROBERT WALKER PRIZE-WINNERS 2020:
Second Prize (International Division)
Annabelle Chua (Hwa Chong Institution, Singapore)

Should legal disputes be determined by artificial,

rather than human means?

From the manufacturing sector, to the online shopping industry, to our personal speakers and smartphones, artificial intelligence has infiltrated almost every aspect of human life.1 The growth of this technological phenomenon has led some to argue that AI should be taken a step further. Being theoretically impartial entities with none of the human predisposition to error, AI systems appear to be an obvious tool for the settlement of legal disputes. However, should artificial means truly replace humans at the decision-making stage?

Before tackling this question, it is important to set some parameters. Artificial means, which will be taken to be synonymous with AI, is defined as any machine or program “which exhibits traits associated with a human mind, such as learning and problem solving”.2 This broad definition generally includes all computers capable of machine learning.

As for legal disputes, here meaning complaints made in court or through a similar legal process, these can be resolved in three main ways: mediation, arbitration, and litigation. Only the latter two generally involve an official, jury, or judge rendering a final, legally binding decision, and they will therefore be this essay's focus.3 According to Justice Robert J. Sharpe, the quality of a legal decision can be judged not only by how well the relevant legal norms have been interpreted and applied, but also by the degree to which the judge has considered the context of the dispute. In addition, the opinion needs to be appropriately justified, such that "the losing [party knows] that the judge actually understood and grappled with the issues [they brought up]".4 Practical concerns, such as the time and resources parties need in order to reach a resolution, must also be considered. As such, legal decision-makers, both artificial and human, will be evaluated on three criteria: fairness, trustworthiness, and efficiency.

Before considering whether AI should, let us consider whether AI can. AI development is widely regarded as being only in its “narrow” stage, that is, consisting of machines which have been taught to perform a single, specific task without being explicitly programmed to do so.5 For an AI to be capable of passing judgements, it would have to be able to assess the evidence presented, understand the perspectives of the dispute’s stakeholders, and interpret the necessary laws. Its reasoning would then have to be written out in the form of a cogent judicial opinion. So far, even one of the most flexible and advanced AI systems available, GPT-2, which has learned to perform a myriad of writing-related tasks, remains incapable of reliably producing coherent text.6 It is clear that current AI technology cannot yet produce a machine which comes close to serving as an adequate substitute for humans, let alone a preferable one.

Of course, AI is a rapidly growing field, and the question cannot be considered only within the constraints of the status quo. With the future of AI still up in the air, a straightforward yes or no cannot be given. Instead, the preconditions that must be met for artificial means to become a preferable substitute, and the feasibility of meeting them, must be considered.

Firstly, on the tenet of fairness. On the surface, machines appear to be far less susceptible to being swayed than humans. Judges and jury members can be bribed, blackmailed, or threatened.7 In certain countries, the possibility of electoral defeat may affect judicial officers’ decisions, especially in verdicts related to the death penalty.8 However, this ignores the fact that computers are susceptible to hacking and tampering. The Center for Strategic and International Studies has reported 23 significant cyber-attacks on various governments since the beginning of 2020 alone.9 When Estonia attempted to move its government services online, a vulnerability in its system almost caused 1.3 million citizens’ identification cards to be exposed.10 Whilst not swayed by the promise of personal gain, machines are uniquely vulnerable to hostile takeovers, and the integrity of their verdicts may be compromised accordingly.

On a deeper level, there is the problem of hidden bias affecting verdict fairness. Numerous psychologists agree that a human’s worldview is strongly shaped by their culture, upbringing, and social norms.11 As such, so long as humans are in charge of determining legal disputes, resolutions cannot be entirely fair and free of bias. However, AI systems do not solve this problem. The American Bar Association has warned its members that AI may “produce results that are materially inaccurate or discriminatory” as a result of flawed inputs.12 Research by ProPublica discovered that algorithms used to predict criminal recidivism were more likely to predict that black Americans would re-offend than their white counterparts, regardless of the offenders’ individual criminal histories.13 Artificial means may therefore not be entirely divorced from human ones, and may act as a proxy for human biases to seep into the judicial system without contest. On the human end, however, biases can be monitored and ameliorated upon identification.14 Awareness of potential prejudices therefore improves the quality of verdicts, as opposed to the false sense of complacency AI may offer. Fairness also increases when different opinions are expressed and discussed, as in juries.15 Therefore, humans’ ability to reflect on and independently learn from their mistakes continues to make them preferable to AI when deciding legal disputes.
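The proxy effect described above can be sketched in a few lines of Python. All data, feature names, and thresholds below are invented for illustration: even after the protected attribute is removed, a correlated feature ("postcode") lets the skew of historical labels reappear in the model's predictions.

```python
import random

random.seed(0)

def make_record(group):
    # "postcode" correlates strongly with group membership (a proxy feature),
    # while the true reoffence base rate is identical across groups.
    postcode = 1 if (group == "A") == (random.random() < 0.9) else 0
    reoffends = random.random() < 0.3          # same base rate for both groups
    # Historical labels were skewed: group A was flagged more often.
    label = reoffends or (group == "A" and random.random() < 0.2)
    return {"postcode": postcode, "label": label, "group": group}

data = [make_record(g) for g in ("A", "B") for _ in range(5000)]

# "Train" the simplest possible model: the observed label rate per postcode.
# Note that the model never sees the group attribute at all.
rates = {}
for pc in (0, 1):
    subset = [r for r in data if r["postcode"] == pc]
    rates[pc] = sum(r["label"] for r in subset) / len(subset)

def predict_high_risk(record):
    return rates[record["postcode"]] > 0.35    # threshold chosen for the demo

flagged = {g: sum(predict_high_risk(r) for r in data if r["group"] == g)
           for g in ("A", "B")}
print(flagged)  # group A is flagged far more often, despite equal base rates
```

The group attribute was dropped, yet the skewed historical labels flow through the correlated postcode feature, so group A is flagged at roughly nine times the rate of group B.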

However, fairness is not enough; plaintiffs and defendants also need to be reasonably assured that the judge has adequately addressed their concerns, and to have trust in the system. Here, humans clearly have the upper hand. To be sure, 47% of those surveyed in the United Kingdom stated that they felt the UK justice system was unfair, but this pales in comparison to the 75% who, in a different survey, noted that they would not trust a decision made by an AI system regarding an applicant’s suitability for a bank loan.16 17 Juries fare even better; less than 42% of Americans surveyed believed that juries are unfair all or most of the time.18 Whether by virtue of collaborative decision-making or the clearly expressed opinions of a single person, both systems offer a sense of logic, trustworthiness, and transparency that AI does not. This problem is exacerbated by the difficulty of reverse-engineering what a machine has learned, which makes it hard to root out causation-correlation fallacies when determining the validity of an AI-decided resolution.19 Such opaqueness makes appeals, a legal cornerstone in ensuring all parties are adequately represented, near impossible. Therefore, unless public opinion on AI shifts radically, and a way of transparently presenting an AI system’s internal train of logic is devised, humans remain the preferable option.

Lastly, there is the question of efficiency. Here, it is clear why so many have become proponents of using AI. Capable of processing large chunks of information at breakneck speeds, examining every detail without the threat of human error, AI has already been deployed in various legal start-ups to conduct contract review, with some arguing that it could negate the need for paralegals very soon.20 China and Estonia have both launched some form of AI-enabled judge. The former, which has a system based on WeChat, has already handled almost 120,000 different cases.21 The mobile court has mainly been deployed to consider cyber-related crimes, and could potentially reach the millions who live without easy access to the country’s courts. However, there is a catch. All major decisions are made by human judges, making the machines simply aides.22 While AI might be faster, there are clearly some key advances that need to be made before humanity is able to trust it to make life-changing decisions.

In conclusion, unless a hack-proof, self-aware, and transparent AI system is developed, legal decision-making should still rest in human hands. As unlikely as such a system sounds, it may not be impossible. New investigations into Seldonian algorithms, for instance, have successfully curbed gender bias in grade point average predictions by explicitly programming the computer to recognise sexism as an undesirable result, suggesting bias can be mitigated over time.23 The answer to whether AI can replace humans as our judges and juries is not no; it is simply, for the foreseeable future, not yet.
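The Seldonian idea mentioned above can be sketched as follows. The data, predictor, and tolerance below are invented for illustration: the key point is that the algorithm runs an explicit fairness check on held-out data and refuses to return a candidate that fails it, rather than silently deploying a biased model.

```python
import random

random.seed(1)

# Held-out safety data: (group, true_score) pairs, e.g. grade point averages.
safety_data = [("M" if random.random() < 0.5 else "F", random.uniform(2.0, 4.0))
               for _ in range(1000)]

def candidate_predictor(group, score):
    # A deliberately biased candidate: it under-predicts one group.
    return score - (0.5 if group == "F" else 0.0)

def mean_error(predictor, group):
    errs = [predictor(g, s) - s for g, s in safety_data if g == group]
    return sum(errs) / len(errs)

def safety_test(predictor, tolerance=0.1):
    # Behavioural constraint: mean prediction error must be similar
    # across groups, within the stated tolerance.
    gap = abs(mean_error(predictor, "M") - mean_error(predictor, "F"))
    return gap <= tolerance

if safety_test(candidate_predictor):
    print("candidate accepted")
else:
    print("No Solution Found")  # the Seldonian framework's refusal outcome
```

Here the biased candidate fails the check and the algorithm returns "No Solution Found"; an unbiased predictor, such as one that returns the score unchanged, would pass.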

  1. “13 Industries Soon To Be Revolutionized By Artificial Intelligence.” Forbes. Forbes Magazine, January 29, 2019. https://www.forbes.com/sites/forbestechcouncil/2019/01/16/13-industries-soon-to-be-revolutionized-by-artificial-intelligence/#2385feea3dc1.

  2. Frankenfield, Jake. “How Artificial Intelligence Works.” Investopedia. Investopedia, March 13, 2020. https://www.investopedia.com/terms/a/artificial-intelligence-ai.asp.

  3. August, Thomas W. “Choose the Right Dispute Resolution Process.” PON, January 29, 2020. https://www.pon.harvard.edu/daily/dispute-resolution/choose-the-right-dispute-resolution-process/.

  4. Sharpe, Robert J. “How Judges Decide” (lecture synopsis). The Foundation for Law, Justice and Society, May 9, 2016. https://www.fljs.org/content/synopsis-how-judges-decide-lecture.

  5. Heath, Nick. “What Is AI? Everything You Need to Know about Artificial Intelligence.” ZDNet. ZDNet, February 12, 2018. https://www.zdnet.com/article/what-is-ai-everything-you-need-to-know-about-artificial-intelligence/.

  6. Vincent, James. “OpenAI’s New Multitalented AI Writes, Translates, and Slanders.” The Verge. The Verge, February 14, 2019. https://www.theverge.com/2019/2/14/18224704/ai-machine-learning-language-models-read-write-openai-gpt2.

  7. Underhill, Jordan. “Bribery on the Bench: A Look at Judicial Corruption.” The Fraud Examiner. The Association of Certified Fraud Examiners. Accessed April 13, 2020. https://www.acfe.com/fraud-examiner.aspx?id=4294994669.

  8. Kritzer, Herbert M. “Impact of Judicial Elections on Judicial Decisions.” Annual Review of Law and Social Science 12 (October 2016): 353–71. https://www.annualreviews.org/doi/full/10.1146/annurev-lawsocsci-110615-084812.

  9. “Significant Cyber Incidents.” Significant Cyber Incidents | Center for Strategic and International Studies. Accessed April 13, 2020. https://www.csis.org/programs/technology-policy-program/significant-cyber-incidents.

  10. Niiler, Eric. “Can AI Be a Fair Judge in Court? Estonia Thinks So.” Wired. Conde Nast. Accessed April 13, 2020. https://www.wired.com/story/can-ai-be-fair-judge-court-estonia-thinks-so/.

  11. Moskowitz, Gordon. “Are We All Inherently Biased?” Lehigh University, February 20, 2019. https://www1.lehigh.edu/research/consequence/are-we-all-inherently-biased.

  12. “Examining Technology Bias: Do Algorithms Introduce Ethical & Legal Challenges?” American Bar Association. Accessed April 13, 2020. https://www.americanbar.org/groups/business_law/publications/blt/2019/04/bias/.

  13. Angwin, Julia, Jeff Larson, Surya Mattu, and Lauren Kirchner. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

  14. “Test Yourself for Hidden Bias.” Teaching Tolerance. Accessed April 13, 2020. https://www.tolerance.org/professional-development/test-yourself-for-hidden-bias.

  15. Fleming, Nic. “Everyone Is Biased, Including You: the Play Designed by Neuroscientists.” The Guardian. Guardian News and Media, January 12, 2019. https://www.theguardian.com/science/2019/jan/12/psychology-of-group-reasoning-versus-individual.

  16. “Almost Half of the British People Don’t Believe the Current UK Justice System Is Fair.” Legal Cheek, June 7, 2016. https://www.legalcheek.com/2016/06/almost-half-of-the-british-people-dont-believe-the-current-uk-justice-system-is-fair/.

  17. “Report Shows Consumers Don’t Trust Artificial Intelligence.” Fintech News, December 4, 2019. https://www.fintechnews.org/report-shows-consumers-dont-trust-artificial-intelligence/.

  18. Lee, Eugene. “Harris Poll: 3 of 5 Americans Believe Juries Fair.” California Labor and Employment Law, February 6, 2008. https://calaborlaw.com/harris-poll-3-out-5-americans-believe-juries-fair/.

  19. Lehnis, Marianne. “Can We Trust AI If We Don’t Know How It Works?” BBC News. BBC, June 15, 2018. https://www.bbc.com/news/business-44466213.

  20. Toews, Rob. “AI Will Transform The Field Of Law.” Forbes. Forbes Magazine, December 19, 2019. https://www.forbes.com/sites/robtoews/2019/12/19/ai-will-transform-the-field-of-law/#39c2fc2f7f01.

  21. “AI Judges and Verdicts via Chat App: The Brave New World of China’s Digital Courts.” The Japan Times, December 7, 2019. https://www.japantimes.co.jp/news/2019/12/07/asia-pacific/crime-legal-asia-pacific/ai-judges-verdicts-via-chat-app-brave-new-world-chinas-digital-courts/#.XpNBDVMzZQI.

  22. Ibid.

  23. Dockrill, Peter. “Can We Force AIs to Be Fair Towards People? Scientists Just Invented a Way.” ScienceAlert. Accessed April 13, 2020. https://www.sciencealert.com/how-can-we-trust-intelligent-machines-to-be-fair-scientists-just-invented-a-way.

The essay was translated into Chinese by Dr Chua Chee Lai (蔡志礼).

Congratulations to Dr Chua Chee Lai and his wife on their daughter,

Chua Yiting of Hwa Chong Institution (College Section),

winning Second Prize (International Division) of the

Robert Walker Prize for Essays in Law

(Trinity College, Cambridge)

李前南 (teacher)
Editorial Board, 5G文学网
Editorial Board, 5G校园网
Editorial Board, 5G电子藏书阁
Editorial Board, 5G电子报
respectfully offer their congratulations
June 15, 2020

All rights reserved © 2020 - 5gsg.net | 5gsgedu.net

5G电子报