Hi there 👋

We’re genwro.AI, an AI research group at Wrocław University of Science and Technology.
In this blog you can find:
• latest updates on what we’re researching and building,
• regular “What We’re Reading” posts where our team shares interesting papers and reports,
• tutorials and explanations of ML concepts we work with,
• tips and tricks we’ve learned along the way - useful whether you’re just starting or already in the field.

Introduction to Counterfactual Explanations

What If AI Could Rewrite Its Decisions? In a world increasingly shaped by algorithmic decisions, a fundamental question emerges: when AI makes a decision about us, how can we understand why and — more importantly — what we could change to get a different outcome? Explainable AI (XAI) is all about answering those questions, pulling back the curtain on black-box algorithms. But while most XAI tools tell you what went wrong, one approach — counterfactual explanations (CFEs) — goes further. CFEs don’t just tell you why an AI decided what it did; they show you how that decision could be different. ...
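To make the idea concrete, here is a minimal, hypothetical sketch of a counterfactual search: a toy scikit-learn classifier stands in for the black box, and a simple greedy perturbation loop looks for a small input change that flips its decision. The `find_counterfactual` helper and its parameters are illustrative assumptions, not the method discussed in the post.

```python
# Toy counterfactual search: nudge a rejected input until the model's
# decision flips, keeping the overall change small.
# Illustrative sketch only -- not a specific CFE algorithm from the post.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Fit a simple stand-in "black box" classifier on synthetic data.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

def find_counterfactual(x, model, target=1, step=0.05, max_iter=500):
    """Greedily apply the single small feature nudge that most increases
    the probability of the target class, until the prediction flips."""
    x_cf = x.copy()
    for _ in range(max_iter):
        if model.predict(x_cf.reshape(1, -1))[0] == target:
            return x_cf  # decision flipped
        p_now = model.predict_proba(x_cf.reshape(1, -1))[0, target]
        best_gain, best_candidate = -np.inf, None
        for i in range(len(x_cf)):
            for delta in (step, -step):
                candidate = x_cf.copy()
                candidate[i] += delta
                gain = model.predict_proba(candidate.reshape(1, -1))[0, target] - p_now
                if gain > best_gain:
                    best_gain, best_candidate = gain, candidate
        x_cf = best_candidate
    return None  # no counterfactual found within the budget

# Explain one rejected instance: what minimal change would flip the outcome?
rejected = X[model.predict(X) == 0][0]
cf = find_counterfactual(rejected, model)
if cf is not None:
    print("original:       ", np.round(rejected, 2))
    print("counterfactual: ", np.round(cf, 2))
    print("change needed:  ", np.round(cf - rejected, 2))
```

The printed "change needed" vector is the counterfactual explanation: the concrete edits to the input that would have led to a different outcome.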

March 1, 2025 · 5 min · Marcin Kostrzewa

Welcome Post

Hey there! Welcome to the genwro.AI Blog 🚀 We’re thrilled to kick off our research blog! Since you’re here, let us give you the scoop on who we are and what we’re planning to share.

Meet genwro.AI

We are a research group from Wrocław University of Science and Technology, established in 2019. Our research spans several key areas in artificial intelligence and machine learning:
• generative models: developing and advancing state-of-the-art architectures for content generation and synthesis,
• 3D representation learning: exploring novel approaches to understanding and representing three-dimensional structures in machine learning systems,
• few-shot learning: investigating methods to enable effective learning from limited data samples,
• uncertainty estimation: advancing techniques for reliable uncertainty quantification and out-of-distribution detection in ML models,
• computer vision: focusing on image segmentation, restoration, and video generation with particular emphasis on real-world applications,
• explainable AI: developing methods for interpretable machine learning, with emphasis on counterfactual explanations and model transparency,
• probabilistic models: investigating Bayesian approaches and probabilistic frameworks for robust machine learning systems.

We’re not working in isolation — we’re part of a broader research community, collaborating with amazing teams from places like the University of Cambridge, Imperial College London, ETH Zurich, and several other fantastic institutions. ...

February 2, 2025 · 3 min