About Us
AI technology has advanced rapidly in recent years, and many experts expect this trend to continue. We may see highly capable and powerful AI systems within our lifetimes. We believe there are still fundamental questions to answer and technical challenges to address to ensure that such systems are beneficial to humanity rather than harmful.
AI Safety Hub Edinburgh (AISHED) is a community of people interested in ensuring that the development of artificial intelligence benefits humanity’s long-term future. We are based in Edinburgh, but act as a hub for the surrounding areas. Our main goals include:
- Helping hub members gain the skills, knowledge, and experience required to contribute to AI safety research
- Providing a forum for the exchange of ideas on AI safety topics
- Facilitating AI safety research within the hub
Contact
Get in touch via email at [email protected], or message an organiser on Discord.
Join our Community
Our primary channel is our Discord server. You can also sign up to our mailing list to keep up to date with events. Feel free to come along to one of our advertised events to meet us in person.
Events
We run a range of events. See our calendar and the event descriptions below.
Discussion Group
We run a weekly discussion group, covering a different aspect of AI safety each week. Meetings take place on Tuesday evenings at 19:00 in 50 George Square, room 3.30.
Past Events
We often record our speaker events. Links to recordings of past events are listed below:
- [2024-03-28] Yoshua Bengio — Why and How could we Design Aligned and Provably Safe AI?
- [2023-11-24] Laura Weidinger & Verena Rieser — Evaluating Social and Ethical Risks from Generative AI
- [2023-10-19] Andy Zou — Representation Engineering: A Top Down Approach to AI Transparency
- [2023-09-14] Patrick Butlin — Consciousness in AI: Insights from the Science of Consciousness
- [2023-08-17] Dami Choi — Tools for Verifying Neural Models' Training Data
- [2023-08-03] Jacob Andreas — Automatic Understanding of Deep Networks with Natural Language Descriptions
- [2023-06-02] Dan Hendrycks — Surveying AI Safety Research Directions
- [2023-05-18] Jacob Hilton — Mechanistic Anomaly Detection
- [2023-04-06] Sören Mindermann — AI Alignment: A Deep Learning Perspective
- [2023-03-16] Jacob Steinhardt — Aligning ML Systems with Human Intent
- [2023-02-24] Sam Bowman — What’s the Deal with AI Safety? Motivations & Open Problems
- [2023-02-17] Rory Greig — Aligning Dialogue Agents via Targeted Human Judgment
- [2023-02-03] Victoria Krakovna — Paradigms of AI Alignment: Components and Enablers
- [2023-01-27] Anders Sandberg — Wireheading: Risks from Hacking Reward Systems
- [2022-10-28] David Krueger — Can We Get Deep Learning Systems to Generalize Safely