
GenAI tools are evolving rapidly – since the public release of ChatGPT in November 2022, a huge range of tools and services has emerged.
With these rapid changes, it is important to keep yourself informed and to understand how technological changes are likely to affect law and the legal profession.
Listen to this audio, in which Francine Ryan provides an introduction to the course.

You can make notes in the box below.
Horizon scanning is a common phrase used in many industries and scientific endeavours. It refers to the process of identifying emerging trends, opportunities, and risks that are not yet fully visible but are approaching on the horizon.
This is a self-paced course of around 120-150 minutes including a short quiz.
A digital badge and a course certificate are awarded upon completion. To receive these, you must complete all sections of the course and pass the course quiz.
After completing this course, you will be able to:
Identify and utilise various sources to stay informed about Generative AI developments and assess the relevance and reliability of the information.
Explain the concept of AI literacy and understand its key components.
Evaluate your own AI literacy, identifying strengths and areas for improvement in your understanding and use of AI.
A glossary that covers all eight courses in this series is always available to you in the course menu on the left-hand side of this webpage.
The emergence of ChatGPT in 2022 took many by surprise. However, for those who had been closely following advancements in technology and research, the rapid development of rival tools and techniques unfolded in a fairly predictable way. Professionals who actively engaged in horizon scanning – monitoring trends to anticipate and prepare for future developments – were well-positioned to adopt and benefit from these new technologies early on.
As generative AI continues to evolve, horizon scanning remains a crucial practice. It enables us to explore an essential question – what does the future hold for GenAI, and how can we harness these advancements to enhance our work and stay ahead in a rapidly changing landscape?
Horizon scanning is a structured, forward-looking process used to explore potential future developments, trends, and disruptions. Its aim is to help organisations or individuals anticipate change, identify emerging opportunities, and manage potential risks. Horizon scanning enables organisations or individuals to stay ahead of the curve and respond proactively rather than reactively to rapidly changing environments.
Why horizon scanning matters
It supports informed, strategic decision-making.
It helps mitigate risks and capitalise on opportunities.
It promotes long-term thinking in a fast-changing world.
The horizon scanning process
Horizon scanning can be conducted in different ways, but one method is:
Data collection
Analysis
Scenario development
Risk assessment
Strategic planning and decision-making
In the next section, we explore different types of resources that can help you gather information to get a better understanding of the current landscape of GenAI.
There are many sources of information, not all of which should be trusted.
When assessing whether information can be trusted, you should consider the evidence presented, weigh any alternative arguments and explanations, and come to an informed conclusion.
This goes beyond the credibility of the source, although that can be important. You should also be persuaded by the strength of the evidence and the argument presented.
Questions you should ask yourself when reading a source of information include:
So how do you keep up with developments, news and trends in GenAI that may affect your work? There is a whole host of resources available, each with its own strengths and weaknesses.
One of the more common sources of information you will see regularly is press releases from commercial companies, often picked up in the popular press. This information can be very helpful for keeping up to date with what products or services are being launched, but it is important to remember that a company has a product to sell. This can lead to information being given a positive spin, or to negative aspects being underplayed.
One way of keeping up to date with commercial developments is to subscribe to the news coming from specific companies. For example, OpenAI is a world-leading generative AI company, with a YouTube channel that you can subscribe to, allowing you to keep up with the innovations they are developing.
However, maintaining an interest in all the individual commercial companies producing GenAI tools would be incredibly time-consuming. A different approach is to keep an eye on commercial trade shows. For example, CES (formerly the Consumer Electronics Show) is an excellent place to explore what companies are developing for consumers. It is worth remembering that at these events, companies will be presenting the best interpretation of their innovations, which may not exactly match how those innovations will work once they are commercialised.
Often mainstream news providers will send reporters to these events and provide analysis of the exhibitions. For example, this BBC report from CES 2024 discusses whether the hype of AI is leading to nonsense products.
Organisations like the Association for Computing Machinery have great resources, and there are a host of online technology magazines – such as Ars Technica, WIRED and TechRadar – that publish articles by journalists with technical expertise. On Substack and LinkedIn you can follow people and organisations writing about AI and law. Organisations such as Thomson Reuters provide reports and insights on how AI is affecting the profession, while Legal IT Insider reports on GenAI, AI and data analytics.
Watch this 14-minute video recorded in 2023 which talks about the possibility of using AI everywhere.
Consider the following points as you watch the video:
You might find it useful to do an internet search about the company Humane and the speaker, co-founder Imran Chaudhri, concentrating on more recent events.
Make some notes in the box below.
Do you think this is good AI? Is this a TED Talk about the future of AI, or is it about promoting a product and an organisation? Having considered the questions above, you might want to read this article: The Humane AI, the year's biggest AI flop, has a silver lining | Laptop Mag

Research is published in academic articles, in either research journals or conference proceedings.
In the areas of robotics and AI systems, the two main bodies are the Association for Computing Machinery (ACM) and the Institute of Electrical and Electronics Engineers (IEEE).
Notable journals and conferences include the ACM Transactions on Intelligent Systems and Technology, the IEEE Transactions on Artificial Intelligence, and the AAAI Conference on Artificial Intelligence. All of these publish world-leading research developments.
If you are looking for a quick summary of how the field is changing, then the IEEE publishes an excellent magazine, Spectrum, where the online version allows you to narrow your interests to Artificial Intelligence and you can subscribe to receive updates.
Google Scholar provides access to journal articles and conference proceedings. In the areas of law and technology, the British and Irish Law Education and Technology Association (BILETA) promotes research into technology law and policy for organisations, governments, professionals, students and the public. You can download publications from their website.
RAi UK (Responsible AI) brings together researchers from across the four nations of the UK to understand how to shape the development of AI to benefit people, communities and society. You can sign up for their newsletter, search their research publications and keep up to date with their different projects.
AdSolve is a UKRI project which addresses socio-technical limitations of Large Language Models (LLMs) in the context of medical and legal use cases. The project will create an evaluation benchmark for assessing the limitations of LLMs, including in a legal context. The outputs of their research will be relevant to organisations deploying LLM tools in their work.
Spend five to ten minutes exploring the artificial intelligence section on the IEEE Spectrum website.
Think about the different types of innovations being reported on and the domains in which AI is being deployed.
Can you spot any patterns? If so, what are the patterns? If not, why do you think the innovations are so diverse? Make notes in the box below.
The articles that feature, and the patterns that emerge, will depend on when you visit the artificial intelligence section. You might want to bookmark the section and revisit it to keep abreast of any changes and developments.
The adoption and use of AI systems are hugely influenced by societal factors. It is therefore important to keep up to date with how society is responding to technical developments, and to think about how you might influence that response. In Course 7, you learnt about the legal regulation of AI, but you might also want to keep abreast of, for example, the Science, Innovation and Technology Committee, which has previously run an inquiry into the governance of artificial intelligence.
An alternative is to focus on reports written by the Parliamentary Office of Science and Technology. As part of Parliament, it offers independent, balanced and accessible analysis of science-related issues. For example, its report titled Use of artificial intelligence in education delivery and assessment is an interesting starting point on one significant concern about the common use of generative AI and its impact on society.
The Inter-Parliamentary Union (IPU), of which the UK is a member, is a global organisation of national parliaments. It has published guidelines for AI in parliaments in partnership with the Parliamentary Data Science Hub in the IPU’s Centre for Innovation in Parliament.
The Guidelines for AI in parliaments offer a comprehensive framework for parliaments to understand and implement AI responsibly and effectively. They provide practical guidance on the importance of a strategic approach, strong governance, ethical considerations and risk management.
Professional bodies provide a wealth of information on AI and its impact on the legal sector.
The Law Society of Ireland has a library guide that provides links to all the guidance issued on the use of AI by the legal profession from the Law Society of Ireland, The Bar of Ireland, Law Society of England and Wales, Solicitors Regulation Authority, Courts and Tribunal Judiciary, The Bar Council, International Bar Council and American Bar Association.
In addition, the Law Society of Scotland has a Guide to GenAI, and CILEX has guidance and resources on AI, as well as an article on the impact of AI in the CILEX Journal.
Many organisations also bring together academia, non-profits, industry and media to think about how to develop AI solutions for good. They provide resources, research and facilitate engagement of stakeholders in this space.
Partnership on AI (PAI)
Partnership on AI is based in the US but has a global mission to bring together change makers to foster AI advances for people and society. There is a resources section on their website with blogs, impact stories and research and publications.
CAST
CAST helps organisations use digital for social good, accelerating their agency, presence and influence in the technologies that affect us all. In February 2025, CAST announced with Zoe Amar Digital (a social enterprise and digital agency that works with charities and non-profits) that it was launching a UK charity task force to support responsible, inclusive and collaborative use of AI. CAST’s Director Dan Sutch explained that such collaboration is at the heart of the new collective’s mission:
“At CAST, we have seen time and again the proven power of networks to effect real change. And we know there’s a real appetite for connection across the sector with regard to AI: almost three-quarters of respondents to CAST’s 2024 AI survey expressed a strong need to link up with peers to discuss AI. That’s why we feel it is vital at this point for social sector organisations – and the supporting infrastructure – to come together and navigate a path through the rapidly-changing AI landscape. We know that if we can share challenges, identify opportunities, foster partnerships and advocate for support as one unified voice, our presence and influence within this new technology will be strengthened beyond measure.”
Network for Justice
Network for Justice supports over 1,000 individuals, organisations, initiatives and projects in the UK which share a common goal of supporting people to access and utilise their legal rights. You can join their network, and as part of the network they facilitate the Justice Innovation Group, which meets every three months to discuss innovative ways to support access to justice. You can also search their database of Justicetech tools.
Access to Justice Network (A2J) Network
The Access to Justice Network is based in the US but attracts a global audience of justice professionals who exchange questions, ideas, announcements and resources. There are five main working groups – Tech & Forms, Legal Self-Help Websites, Access to Justice Research, Court-Based Assistance, and Law Libraries – which hold regular webinars from autumn to early summer. You can join the network by emailing: a2jnetwork@stanford.edu
There are many different ways to stay informed about AI, and the areas you choose to focus on may depend on whether your interest is personal or professional. In the next section, we will explore some emerging issues and reflect on the importance of AI literacy.
In Course 3, you had the opportunity to start conducting research into understanding how GenAI is impacting your sector.
You may want to re-visit this activity having learnt more about sources of information, the information that has been presented through the courses you have studied and the skills you have developed.
Research – understanding GenAI in your sector
It is important to understand how GenAI is currently impacting your sector. Use the prompts below to conduct research and collect insights.
| Focus area | Your notes / research findings |
| Current GenAI trends | What are the emerging GenAI applications in your sector? (e.g., document automation, triaging, case summaries) |
| Case studies | Are there examples of organisations in your sector successfully using GenAI? What are they doing? |
| Challenges and risks | What concerns are being raised? (e.g., accuracy, bias, data security, ethics, regulatory compliance) |
| Opportunities | What potential benefits could GenAI bring to your sector? (e.g., efficiency, creativity, cost reduction) |
| Competitor activity | How are your competitors or peer organisations adopting or experimenting with GenAI? |
| Regulations and policies | What are the legal, ethical, or regulatory frameworks relevant to GenAI use in your sector? |
Alternatively, you might choose one of the suggested topics in the following list. Using the guidance you have learnt in this week’s study so far, spend 30 minutes doing some research at the relevant link provided. If there’s another area you’d rather investigate, focus on that!
Once you have completed your research, write a short summary in the text box below.
From your research, you have probably seen that there is a vast amount of discussion and commentary on GenAI, and it can feel quite overwhelming.
In the final part of this course we consider Gartner’s Hype Cycle, so it is perhaps worthwhile pausing and reflecting for a moment to acknowledge that there is a lot of hype and speculation around AI. No one can accurately predict the future. It is important to be both cautious and critical – don't believe all the hype, be rigorous in your approach, be educated, ask questions, and challenge assumptions.
If you want to learn more about the history of tech bubbles read ‘Watching the Generative AI Hype Bubble Deflate’.
Now that you have conducted your research, you should have a deeper understanding of the AI landscape.
In the Further reading section (you can access this from the tab on the left-hand side menu on this page), we have added links to information that explores how the insights gathered through this exercise can be applied to support strategic planning and decision-making.

The evolution of AI means that GenAI tools are performing tasks that would normally require human intelligence, such as making decisions, generating texts and images and translating languages. This is having a profound impact on our society and brings both challenges and opportunities.
There are concerns about LLMs like ChatGPT because (as you learnt in Course 1) the outputs are often wrong or misleading. Furthermore, there is a growing concern that LLMs are now being trained on data they themselves have generated. Since LLMs produce web content, and new models are trained on web data, this creates a feedback loop where AI-generated content influences future AI training.
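As a toy illustration of why this feedback loop concerns researchers – an effect often called 'model collapse' – the sketch below stands a simple Gaussian in for a model: each 'generation' is fitted only to samples drawn from the previous generation, and diversity tends to drain away. The Gaussian stand-in and the parameter choices are assumptions made for illustration, not how LLMs are actually trained.

```python
# Toy sketch of the training feedback loop: each generation of a "model"
# (here just a Gaussian, described by its mean and standard deviation) is
# fitted only to data sampled from the previous generation. Over many
# generations the fitted distribution tends to lose variance.
import random
import statistics

random.seed(0)  # fixed seed so the illustration is repeatable

def next_generation(mean, sd, n=20):
    """Fit a new Gaussian to n samples drawn from the current one."""
    samples = [random.gauss(mean, sd) for _ in range(n)]
    return statistics.mean(samples), statistics.stdev(samples)

mean, sd = 0.0, 1.0                # generation 0: the "human data" model
for generation in range(500):      # repeatedly train on synthetic output
    mean, sd = next_generation(mean, sd)

print(f"standard deviation after 500 generations: {sd:.6f}")
```

With only a small sample per generation, the fitted standard deviation typically shrinks towards zero – a loose analogue of AI-generated content becoming progressively less diverse as it feeds back into training data.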
The extent to which LLMs are producing false information is now being tracked – you can access the FACTS Grounding benchmark, which ranks an LLM’s ability to generate factually accurate responses, and learn more about how the benchmark evaluates LLMs.
The fifth course in our series, Ethical and responsible use of Generative AI, discussed the research by Schneiders et al. (2024), which explored the impact of LLMs on legal advice and the legal profession. It demonstrated that when laypeople are presented with advice without knowing whether it came from an LLM or a lawyer, they are more likely to act on the advice provided by the LLM.
Notwithstanding that LLMs hallucinate and provide incorrect information, the persuasiveness of the output can undermine our ability to critically assess its accuracy. There are concerns about the potential for us to become overdependent on GenAI tools, and about the impact this may have on our ability to develop foundational knowledge, which in turn reduces our ability to critically review outputs from LLMs.
There is therefore potential for these tools to erode the very idea of trust and there are important ethical questions around the displacement of humans by AI technologies. The increasing sophistication of AI tools means that there is greater potential for AI to help us think or think for us. If we are going to use AI, then it is incumbent on us to consider whether AI helps or hinders us in becoming wiser.
Click on each number below to reveal some questions you need to consider.
Watch this TED Talk video Madison Mohns: AI and the paradox of self-replacing workers.
Spend a few minutes reflecting on the presentation. Which of the questions above concerns you most, and why? What other questions might you have?
Make some notes in the box below.
These questions are not intended to cause alarm; however, if we are going to use AI, we must guard against complacency. It is essential to be aware of the potential consequences of greater AI use and to continually develop our knowledge to remain critically informed.

AI is progressing rapidly, and the ability to scan the horizon for future applications of AI is critical to understanding where advances might impact law and the delivery of legal services.
Horizon scanning can assist us in assessing the opportunities and risks that emerge from new developments. LLMs have the capability to undertake a wide range of tasks, but the models still have limitations, such as hallucinations and biases. An area of growing interest is how we develop and achieve effective human and LLM collaboration (Passerini et al., 2025).
As we consider deeper integration of LLMs, we need to think about how human and machine intelligence interact.
At one end of the spectrum, humans retain full decision-making control – at the other, LLMs entirely replace human involvement. Faggioli et al. (2024) suggest a model for human and machine collaboration.
Click on each label below to learn more.
As LLMs continue to evolve, organisations must continually evaluate human and AI interaction and, as discussed in Courses 4 and 6, make sure there are systems in place to document changes. The rise of GenAI is driving the next major transformation – the emergence of AI agents. AI agents are different because they can complete tasks independently of humans (Kolt, 2025).
Yee et al. (2024) argue that they are the new frontier of GenAI:
“Broadly speaking, 'agentic' systems refer to digital systems that can independently interact in a dynamic world.
“While versions of these software systems have existed for years, the natural-language capabilities of gen AI unveil new possibilities, enabling systems that can plan their actions, use online tools to complete those tasks, collaborate with other agents and people, and learn to improve their performance.
“Gen AI agents eventually could act as skilled virtual coworkers, working with humans in a seamless and natural manner. A virtual assistant, for example, could plan and book a complex personalized travel itinerary, handling logistics across multiple travel platforms.
“Using everyday language, an engineer could describe a new software feature to a programmer agent, which would then code, test, iterate, and deploy the tool it helped create.”
Microsoft announced in October 2024 that it was introducing new agentic capabilities within Copilot, Google has Google Cloud Vertex AI Agent, Meta have developed CICERO, and all the other tech companies are developing AI agents.
AI agents rely on LLMs, so they give rise to the same risks of bias, hallucinations, and privacy issues which are likely to lead to new legal and ethical challenges.
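The 'plan, use tools, observe' pattern that Yee et al. describe can be sketched as a minimal loop. This is an illustrative sketch only: the scripted `fake_llm` policy stands in for a real language model, and the tool names and travel-booking scenario are invented for the example.

```python
# Minimal sketch of an agentic loop: a policy (standing in for an LLM)
# repeatedly chooses a tool, observes the result, and stops when done.
# The fake_llm policy and both tools are invented for illustration.

def search_flights(destination):
    # Stand-in for a real travel-search API call.
    return f"cheapest flight to {destination}: £120"

def book(item):
    # Stand-in for a real booking API call.
    return f"booked: {item}"

TOOLS = {"search_flights": search_flights, "book": book}

def fake_llm(goal, history):
    """Scripted stand-in for a model that plans the next step."""
    if not history:
        return ("search_flights", goal)   # step 1: gather information
    if len(history) == 1:
        return ("book", history[-1])      # step 2: act on what was found
    return ("finish", history[-1])        # step 3: report the outcome

def run_agent(goal, max_steps=5):
    history = []
    for _ in range(max_steps):
        action, arg = fake_llm(goal, history)
        if action == "finish":
            return arg
        history.append(TOOLS[action](arg))  # run the tool, record the observation
    return "gave up"

print(run_agent("Lisbon"))
```

The point of the sketch is the control flow, not the fake tools: the model plans, the system executes, and each observation feeds back into the next planning step – which is also where the familiar LLM risks (hallucinated plans, biased choices) re-enter at every iteration.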
With these rapid changes, it is important to keep yourself informed, and to understand how technological changes are likely to impact you as an individual and the organisation you work for.
Technology is moving at a rapid rate, and operating in an AI-driven world requires an ongoing process of learning and developing your skills and knowledge.
AI literacy should be part of your continuing professional development; it comprises understanding AI and its opportunities, risks and challenges. In Course 7, you learnt about AI regulation, and Article 4 of the EU AI Act contains a specific requirement for AI literacy for those who use AI systems as employees.
“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used.”
In the US, at the time of writing, there is draft legislation in progress that covers AI literacy; the AI Leadership Training Act. This bill requires the Office of Personnel Management (OPM) to develop and implement an annual training program on artificial intelligence (AI) for federal managers, supervisors, and other employees designated to participate in the program.
The program shall, at a minimum, include information relating to:
The Artificial Intelligence Literacy Act of 2023 (which amends the Digital Equity Act 2021) proposes that:
This bill provides for grants under the Digital Equity Competitive Grant Program to support training and programming on artificial intelligence in schools, colleges, and other community institutions. ... Under the bill, such grants may fund programs to support artificial intelligence literacy, including education on the basic principles, applications, and limitations of artificial intelligence.
In the UK, at the time of writing, there is no legal requirement for AI literacy, but the Artificial Intelligence Playbook for the UK Government, published in February 2025, has ten principles. Two of these align specifically with AI literacy: Principle 1 ('You know what AI is and what its limitations are') and Principle 9 ('You have the skills and expertise needed to implement and use AI'). The UK Government National AI Strategy covers AI literacy within skills for jobs.
AI literacy is therefore an important framework to critically understand, evaluate and use new technologies and one that requires a ‘self-reflective mindset’ (Chiu and Sanusi, 2024). It is recognised that AI is distinct from other digital technologies and requires its own set of competencies.
Many frameworks, like UNESCO’s, focus on supporting students and teachers to incorporate AI competencies in education. These frameworks aim to equip learners with the skills, knowledge and values to engage with AI effectively. UNESCO also published a report in 2023 that considered how the trends impacting society are relevant to the future of education and the future of work. The report sets out competencies for the future, which include digital skills but also others such as emotional intelligence, creativity and critical thinking – all needed in human oversight of AI.
Drawing on the key elements that you have engaged with over the eight courses, we have set out a framework below that we suggest reflects the competencies required to critically engage with GenAI in legal contexts.
Click on each label below to learn more.
Rate your confidence in each of these AI literacy areas on a scale of 1–5.
All responses are completely anonymous.
Based on your self-assessment, you might want to create some goals, for example:
I will read one article about AI ethics this week.
I will experiment with an AI tool and reflect on its applications.
I will learn more about AI regulation.
You might want to create a learning journal to track the development of your knowledge and understanding of AI.
This course has considered the importance of horizon scanning. It has discussed different sources of information for keeping up to date with GenAI developments, and it has highlighted the importance of AI literacy.
Change appears to be happening at such a rapid rate and there is so much media hype and fear around AI that it can feel quite overwhelming.
The Gartner Hype Cycle maps the five stages of a technology’s life cycle:
Click on each button below to learn more.
If you are interested, you can view two diagrammatic versions of the Gartner Hype Cycle in this supplementary course content:
People will be at different stages but many of us might be somewhere between stage 2 and stage 3. At this stage it probably feels quite messy, so we need to be realistic about what we can do: try to carve out some time to learn about AI, explore the tools, and develop your understanding. Even if you don’t want to use GenAI, it is important to recognise that AI is being integrated into existing technologies, so it is essential to understand the implications of the technology and the risks and challenges.
Thank you for taking part in this course. We hope you found the eight courses both interesting and valuable, and you now have a deeper understanding of GenAI in the context of law, the advice sector and the legal profession.
While this marks the end of our courses, we hope it is just the beginning of your learning journey. AI is rapidly transforming society, and a greater awareness of its legal and ethical implications is essential to ensure it benefits all aspects of our lives. We encourage you to stay curious, continue exploring, and actively engage in the AI discourse.
When you are ready, you can move on to the Course 8 quiz.
Here is a useful list of the key website links used in the learning content of this course.
AAAI Conference on Artificial Intelligence
Addressing Socio-technical Limitations of LLMs for Medical and Social Computing
AI and the Legal Profession: Professional Guidance
Association for Computing Machinery
Artificial Intelligence Literacy Act of 2023
BBC – CES 2024: AI pillows and toothbrushes - is it all getting a bit silly?
Centre for Innovation in Parliament
Defiant Microsoft pushes ahead with controversial Recall – though as an opt-in
Digital due diligence: A practical guide to AI and ethics in the legal profession
Empowering And Connecting The Responsible AI Ecosystem
FACTS Grounding: A new benchmark for evaluating the factuality of large language models
Future of Professionals Report
Governance of artificial intelligence (AI)
Guidelines for AI in parliaments
How AI is transforming the legal profession (2025)
How to learn about the future? | Your ultimate horizon scanning protocol
IEEE Transactions on Artificial Intelligence
Meet Codex CLI—an open-source local coding agent that turns natural language into working code.
Navigating the AI landscape: A guide for charities on opportunities, risks and compliance
OpenAI – What can I help you with?
Risk Or Revolution: Will AI Replace Lawyers?
Sourcing reliable and impartial scientific research for Parliament
Taking a responsible path to AGI
Training lawyers for the age of AI
Ultimate Horizon Scanning Guide: how to spot trends early
Use of artificial intelligence in education delivery and assessment
What you need to know about UNESCO's new AI competency frameworks for students and teachers
Artificial Intelligence Playbook for the UK Government (2025). Available at: https://www.gov.uk/government/publications/ai-playbook-for-the-uk-government/artificial-intelligence-playbook-for-the-uk-government-html (Accessed: 28 April 2025).
Chiu, T. and Sanusi, I. (2024) ‘Define, foster, and assess student and teacher AI literacy and competency for all: Current status and future research direction’, Computers and Education Open, Volume 7. Available at: https://doi.org/10.1016/j.caeo.2024.100182 (Accessed: 28 April 2025).
Congress.Gov (2023) 'H.R.6791 - Artificial Intelligence Literacy Act of 2023', 118th Congress (2023-2024). Available at: https://www.congress.gov/bill/118th-congress/house-bill/6791. (Accessed: 28 April 2025).
EU Artificial Intelligence Act (2025) Article 4: AI Literacy. Available at: https://artificialintelligenceact.eu/article/4/ (Accessed: 28 April 2025).
Faggioli, G., Dietz, L., Clarke, C., Demartini, G., Hagen, M., Hauff, C., Kando, N., Kanoulas, E., Potthast, M., Stein, B., and Wachsmuth, H. (2024) ‘Who determines what is relevant? Humans or AI? Why not both?’, Communications of the Association for Computing Machinery, 64(4), pp. 34–37. Available at: https://doi.org/10.1145/36247 (Accessed: 28 April 2025).
Gartner (2025) Gartner Hype Cycle. Available at: https://www.gartner.com/en/research/methodologies/gartner-hype-cycle (Accessed: 28 April 2025).
Kolt, N. (2025) Governing AI Agents. Available at: https://doi.org/10.48550/arXiv.2501.07913 (Accessed: 28 April 2025).
Congress.Gov (2023) 'S.1564 - AI Leadership Training Act', 118th Congress (2023-2024). Available at: https://www.congress.gov/bill/118th-congress/senate-bill/1564 (Accessed: 28 April 2025).
Passerini, A., Gema, A., Minervini, P., Sayin, B. and Tentori, K. (2025) ‘Fostering effective hybrid human-LLM reasoning and decision making’, Frontiers in Artificial Intelligence. Available at: https://doi.org/10.3389/frai.2024.1464690 (Accessed: 9 May 2025).
Yee, L., Chui, M. and Roberts, R. (2024) 'Why agents are the next frontier of generative AI', McKinsey Digital. Available at: https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/why-agents-are-the-next-frontier-of-generative-ai (Accessed: 28 April 2025).
Grateful acknowledgement is made to the following sources:
Every effort has been made to contact copyright holders. If any have been inadvertently overlooked the publishers will be pleased to make the necessary arrangements at the first opportunity.
Important: *** against any of the acknowledgements below means that the wording has been dictated by the rights holder/publisher, and cannot be changed.
552250: VectorMine / shutterstock
552264: Lichtwolke / shutterstock
557486: Francine Ryan
552266: markmags / pixabay
553335: © 2023 CAST
552268: Freder / Getty
552269: SvetaZi / shutterstock
552270: geralt / pixabay
557504: By Jeremykemp at English Wikipedia, CC BY-SA 3.0. This file is licensed under the Creative Commons Attribution-Non-commercial-Share Alike Licence https://creativecommons.org/licenses/by-sa/3.0/deed.en
557506: By Olga Tarkovskiy - Own work, CC BY-SA 3.0. This file is licensed under the Creative Commons Attribution-Non-commercial-Share Alike Licence https://creativecommons.org/licenses/by-sa/3.0/deed.en