Gardner, A., Smith, A.L., Steventon, A., Coughlan, E., and Oldfield, M. (2022). Ethical funding for trustworthy AI: proposals to address the responsibility of funders to ensure that projects adhere to trustworthy AI practice. AI and Ethics 2, 277–291.
Herzog, C. (2022). On the risk of confusing interpretability with explicability. AI and Ethics 2, 219–225.
Starke, G., van den Brule, R., Elger, B.S., and Haselager, P. (2022). Intentional machines: A defence of trust in medical artificial intelligence. Bioethics 36, 154–161.
Winter, P.D., and Carusi, A. (2022). (De)troubling transparency: artificial intelligence (AI) for clinical applications. Medical Humanities. DOI: 10.1136/medhum-2021-012318.
Alvarado, R. (2022). What kind of trust does AI deserve, if any? AI and Ethics. DOI: 10.1007/s43681-022-00224-x.
Hallowell, N., Badger, S., Sauerbrei, A., Nellåker, C., and Kerasidou, A. (2022). “I don’t think people are ready to trust these algorithms at face value”. Trust and the use of machine learning algorithms in the diagnosis of rare disease. BMC Medical Ethics 23, 112. DOI: 10.1186/s12910-022-00842-4 [Open Access].
Hasani, N., Morris, M.A., Rahmim, A., Summers, R.M., Jones, E., Siegel, E., and Saboury, B. (2022). Trustworthy artificial intelligence in medical imaging. PET Clinics 17 (1), 1–12. DOI: 10.1016/j.cpet.2021.09.007.
Kerasidou, C., Kerasidou, A., Buscher, M., and Wilkinson, S. (2022). Before and beyond trust. Reliance in medical AI. Journal of Medical Ethics 48 (11), 852–856. DOI: 10.1136/medethics-2020-107095 [Open Access].
Nickel, P.J. (2022). Trust in medical artificial intelligence. A discretionary account. Ethics and Information Technology 24, 7. DOI: 10.1007/s10676-022-09630-5 [Open Access].
Starke, G., and Ienca, M. (2022). Misplaced trust and distrust. How not to engage with medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics. DOI: 10.1017/S0963180122000445 [Open Access].
Winter, P., and Carusi, A. (2022). ‘If you’re going to trust the machine, then that trust has got to be based on something’. Validation and the co-constitution of trust in developing artificial intelligence (AI) for the early diagnosis of pulmonary hypertension (PH). Science & Technology Studies 35 (4), 58–77. DOI: 10.23987/sts.102198 [Open Access].
Kiseleva, A., Kotzinos, D., and De Hert, P. (2022). Transparency of AI in healthcare as a multilayered system of accountabilities. Between legal requirements and technical limitations. Frontiers in Artificial Intelligence 5, 879603. DOI: 10.3389/frai.2022.879603.
Ott, T., and Dabrock, P. (2022). Transparent human – (non-)transparent technology? The Janus-faced call for transparency in AI-based health care technologies. Frontiers in Genetics 13, 902960. DOI: 10.3389/fgene.2022.902960 [Open Access].
Salahuddin, Z., Woodruff, H.C., Chatterjee, A., and Lambin, P. (2022). Transparency of deep neural networks for medical image analysis. A review of interpretability methods. Computers in Biology and Medicine 140, 105111. DOI: 10.1016/j.compbiomed.2021.105111 [Open Access].
Schmitz, R., Werner, R., Repici, A., Bisschops, R., Meining, A., Zornow, M., Messmann, H., Hassan, C., Sharma, P., and Rösch, T. (2022). Artificial intelligence in GI endoscopy. Stumbling blocks, gold standards and the role of endoscopy societies. Gut 71 (3), 451–454. DOI: 10.1136/gutjnl-2020-323115.
Amann, J., Vetter, D., Blomberg, S.N., Christensen, H.C., Coffee, M., Gerke, S., Gilbert, T.K., Hagendorff, T., Holm, S., Livne, M., Spezzatti, A., Strümke, I., Zicari, R.V., and Madai, V.I. (2022). To explain or not to explain? Artificial intelligence explainability in clinical decision support systems. PLOS Digital Health 1 (2), e0000016. DOI: 10.1371/journal.pdig.0000016 [Open Access].
Arbelaez Ossa, L., Starke, G., Lorenzini, G., Vogt, J.E., Shaw, D.M., and Elger, B.S. (2022). Re-focusing explainability in medicine. Digital Health 8. DOI: 10.1177/20552076221074488 [Open Access].
Chen, H., Gomez, C., Huang, C.-M., and Unberath, M. (2022). Explainable medical imaging AI needs human-centered design. Guidelines and evidence from a systematic review. npj Digital Medicine 5, 156. DOI: 10.1038/s41746-022-00699-2 [Open Access].
Combi, C., Amico, B., Bellazzi, R., Holzinger, A., Moore, J.H., Zitnik, M., and Holmes, J.H. (2022). A manifesto on explainability for artificial intelligence in medicine. Artificial Intelligence in Medicine 133, 102423. DOI: 10.1016/j.artmed.2022.102423 [Open Access].
Funer, F. (2022). Accuracy and interpretability. Struggling with the epistemic foundations of machine learning-generated medical information and their practical implications for the doctor-patient relationship. Philosophy & Technology 35, 5. DOI: 10.1007/s13347-022-00505-7 [Open Access].
Funer, F. (2022). The deception of certainty. How non-interpretable machine learning outcomes challenge the epistemic authority of physicians. A deliberative-relational approach. Medicine, Health Care and Philosophy 25, 167–178. DOI: 10.1007/s11019-022-10076-1 [Open Access].
Hatherley, J., Sparrow, R., and Howard, M. (2022). The virtues of interpretable medical artificial intelligence. Cambridge Quarterly of Healthcare Ethics. DOI: 10.1017/S0963180122000305 [Open Access].
Herzog, C. (2022). On the ethical and epistemological utility of explicable AI in medicine. Philosophy & Technology 35, 50. DOI: 10.1007/s13347-022-00546-y [Open Access].
Kawamleh, S. (2022). Against explainability requirements for ethical artificial intelligence in health care. AI and Ethics. DOI: 10.1007/s43681-022-00212-1.
Kempt, H., Freyer, N., and Nagel, S.K. (2022). Justice and the normative standards of explainability in healthcare. Philosophy & Technology 35, 100. DOI: 10.1007/s13347-022-00598-0 [Open Access].
Kempt, H., Heilinger, J.-C., and Nagel, S.K. (2022). Relative explainability and double standards in medical decision-making. Should medical AI be subjected to higher standards in medical decision-making than doctors? Ethics and Information Technology 24, 20. DOI: 10.1007/s10676-022-09646-x [Open Access].
McCoy, L.G., Brenna, C.T.A., Chen, S.S., Vold, K., and Das, S. (2022). Believing in black boxes. Machine learning for healthcare does not need explainability to be evidence-based. Journal of Clinical Epidemiology 142, 252–257. DOI: 10.1016/j.jclinepi.2021.11.001.
Petch, J., Di, S., and Nelson, W. (2022). Opening the black box. The promise and limitations of explainable machine learning in cardiology. Canadian Journal of Cardiology 38 (2), 204–213. DOI: 10.1016/j.cjca.2021.09.004 [Open Access].
Pierce, R.L., Van Biesen, W., Van Cauwenberge, D., Decruyenaere, J., and Sterckx, S. (2022). Explainability in medicine in an era of AI-based clinical decision support systems. Frontiers in Genetics 13, 903600. DOI: 10.3389/fgene.2022.903600 [Open Access].
Ratti, E., and Graves, M. (2022). Explainable machine learning practices. Opening another black box for reliable medical AI. AI and Ethics 2 (4), 801–814. DOI: 10.1007/s43681-022-00141-z [Open Access].
Ursin, F., Timmermann, C., and Steger, F. (2022). Explicability of artificial intelligence in radiology. Is a fifth bioethical principle conceptually necessary? Bioethics 36 (2), 143–153. DOI: 10.1111/bioe.12918 [Open Access].
Yoon, C.H., Torrance, R., and Scheinerman, N. (2022). Machine learning in medicine. Should the pursuit of enhanced interpretability be abandoned? Journal of Medical Ethics 48 (9), 581–585. DOI: 10.1136/medethics-2020-107102 [Open Access].
Friedrich, A.B., Mason, J., and Malone, J.R. (2022). Rethinking explainability. Toward a postphenomenology of black-box artificial intelligence in medicine. Ethics and Information Technology 24, 8. DOI: 10.1007/s10676-022-09631-4.
Pierce, R., Sterckx, S., and Van Biesen, W. (2022). A riddle, wrapped in a mystery, inside an enigma. How semantic black boxes and opaque artificial intelligence confuse medical decision-making. Bioethics 36 (2), 113–120. DOI: 10.1111/bioe.12924 [Open Access].
Quinn, T.P., Jacobs, S., Senadeera, M., Le, V., and Coghlan, S. (2022). The three ghosts of medical AI. Can the black-box present deliver? Artificial Intelligence in Medicine 124, 102158. DOI: 10.1016/j.artmed.2021.102158.
Wadden, J.J. (2022). Defining the undefinable. The black box problem in healthcare artificial intelligence. Journal of Medical Ethics 48 (10), 764–768. DOI: 10.1136/medethics-2021-107529.
Babushkina, D. (2022). Are we justified attributing a mistake in diagnosis to an AI diagnostic system? AI and Ethics. DOI: 10.1007/s43681-022-00189-x [Open Access].
Bleher, H., and Braun, M. (2022). Diffused responsibility. Attributions of responsibility in the use of AI-driven clinical decision support systems. AI and Ethics 2 (4), 747–761. DOI: 10.1007/s43681-022-00135-x [Open Access].
Sand, M., Durán, J.M., and Jongsma, K.R. (2022). Responsibility beyond design. Physicians’ requirements for ethical medical AI. Bioethics 36 (2), 162–169. DOI: 10.1111/bioe.12887 [Open Access].
Verdicchio, M., and Perin, A. (2022). When doctors and AI interact. On human responsibility for artificial risks. Philosophy & Technology 35, 11. DOI: 10.1007/s13347-022-00506-6 [Open Access].
|  | All Time | Past 365 Days | Past 30 Days |
| --- | --- | --- | --- |
| Abstract Views | 910 | 517 | 68 |
| Full Text Views | 519 | 46 | 5 |
| PDF Views & Downloads | 1018 | 102 | 14 |