EF CORE IN DEPTH
But in EF Core 5 there is a really nice Fluent API called HasPrecision(9, 2), which is easier. 4. Avoid expression body properties with EF Core. In a normal class, having a property that contains code (referred to as an expression body definition) is the right thing to do.
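A minimal sketch of the two points above, assuming a hypothetical `Book` entity with a decimal `Price` (the `HasPrecision` Fluent API needs EF Core 5 or later):

```csharp
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int BookId { get; set; }

    // A normal auto-property: EF Core can map this to a column.
    public decimal Price { get; set; }

    // Avoid this with EF Core: an expression body property is computed,
    // so EF Core cannot map it to a database column.
    // public decimal PriceWithTax => Price * 1.2m;
}

public class AppDbContext : DbContext
{
    public DbSet<Book> Books { get; set; }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        // EF Core 5's HasPrecision sets the decimal precision/scale
        // directly, instead of HasColumnType("decimal(9,2)").
        modelBuilder.Entity<Book>()
            .Property(b => b.Price)
            .HasPrecision(9, 2);
    }
}
```

The entity and context names here are invented for illustration; the article's own code is not part of this excerpt.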
EVOLVING MODULAR MONOLITHS: 2. BREAKING UP YOUR APP INTO …
This is the second article in a series about building Microsoft .NET applications using a modular monolith architecture. This article covers a way to extract parts of your application into separate solutions, which you turn into NuGet packages that are installed in your main application. Each solution is physically separated from the others in a similar way to a Microservice architecture, but …

GENERICSERVICES: A LIBRARY TO PROVIDE CRUD FRONT-END …
This article is about a NuGet library designed to make building Create, Read, Update and Delete (CRUD) web pages quicker to write. GenericServices acts as an adapter and command pattern between EF Core and your web/mobile/desktop application. This article describes why this is useful and how it can save you development time.

IS THE REPOSITORY PATTERN USEFUL WITH ENTITY FRAMEWORK …
This is a series: Part 1: Analysing whether the Repository pattern is useful with Entity Framework (this article). Part 2: Four months on – my solution to replacing the Repository pattern. UPDATE (2018): Big re-write to take into account Entity Framework Core, and further learning. I have just finished the first release of Spatial Modeller™, a medium-sized ASP.NET MVC web application.
I wrote my first article about the repository pattern in 2014, and it is still a popular post. This is an updated article that takes account of a) the release of Entity Framework Core (EF Core) and b) further investigations of different EF Core database access patterns.

BUILDING A ROBUST CQRS DATABASE WITH EF CORE AND COSMOS DB
This application was built with EF Core 2.2 and the Microsoft.EntityFrameworkCore.Cosmos package 2.2.0-preview3-35497. This is a very early version of Cosmos DB support in EF Core, with some limitations on this application. They are: … This Cosmos DB preview is …

ENTITY FRAMEWORK CORE
Hi Jon, I’m not aware of any downside apart from the fact that the interface is marked as: “This is an internal API that supports the Entity Framework Core infrastructure and is not subject to the same compatibility standards as public APIs.”

WRAPPING YOUR BUSINESS LOGIC WITH ANTI-CORRUPTION LAYERS
Wrapping your business logic with anti-corruption layers – .NET Core. There is a concept in Domain-Driven Design (DDD) called the anti-corruption layer which, according to Microsoft’s explanation of an anti-corruption layer, “Implements a façade or adapter layer between different subsystems that don’t share the same semantics”.

THE REFORMED PROGRAMMER
The Reformed Programmer – I am a freelance .NET Core back-end developer. Evolving modular monoliths: 3. Passing data between bounded contexts. Last Updated: May 17, 2021 | Created: May 17, 2021. This article describes the different ways you can pass data between isolated sections of your code, known in DDD as bounded contexts.
This is a companion article to the EF Core Community Standup called “Performance tuning an EF Core app”, where I apply a series of performance enhancements to a demo ASP.NET Core e-commerce book-selling site called the Book App. I start with 700 books, then 100,000 books and finally ½ million books. This article, plus the EF Core Community Standup video, pulls information from chapters …

HOW TO TAKE AN ASP.NET MVC WEB SITE “DOWN FOR MAINTENANCE”
I am working on an ASP.NET MVC5 e-commerce site and my focus is on how to apply database migrations to such a site (see previous articles on this subject). I have decided that for complex database changes I will take the web site “Down for maintenance” while I make the changes.

NEW FEATURES FOR UNIT TESTING YOUR ENTITY FRAMEWORK CORE 5 …
The new features in EF Core 5 that help with unit testing. I am going to cover four features that have changed in the EfCore.TestSupport library, either because of new features in EF Core 5, or improvements that have been added to the library. Creating a …

ASP.NET CORE RAZOR PAGES: HOW TO IMPLEMENT AJAX REQUESTS
ASP.NET Core 2.0 introduced a new way to build a web site, called Razor Pages. I really like Razor Pages, but I needed to work out how to do a few things. In this article I …
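A common read-side technique in performance-tuning material like the standup article above is projecting queries straight into DTOs with `Select`. This is a hedged sketch, not the article's own code; the `Book` and `BookListDto` classes are invented:

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Book
{
    public int BookId { get; set; }
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public class BookListDto
{
    public string Title { get; set; }
    public decimal Price { get; set; }
}

public static class BookQueries
{
    // Projecting into a DTO makes EF Core generate SQL that reads only
    // the needed columns; AsNoTracking skips change tracking, which a
    // read-only list display does not need.
    public static IQueryable<BookListDto> MapBookToDto(this IQueryable<Book> books)
        => books.AsNoTracking()
                .Select(b => new BookListDto
                {
                    Title = b.Title,
                    Price = b.Price
                });
}
```
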
UNIT TESTING REACT COMPONENTS THAT USE REDUX
The connect() (InnerConnect) on the last line adds the property dispatch to the props. Normal use of Redux by a component would include more Redux code. Redux’s documentation on Writing Tests suggests you export the React component class as well as the default, which should be the class used (decorated) by a connect() call.

PART 1: A BETTER WAY TO HANDLE AUTHORIZATION IN ASP.NET …
Part 3: A better way to handle authorization – six months on. Part 4: Building robust and secure data authorization with EF Core. Part 5: A better way to handle authorization – refreshing user’s claims. Part 6: Adding user impersonation to an ASP.NET Core web application.

ASP.NET CORE
Microsoft’s documentation says “ASP.NET Core is designed from the ground up to support and leverage dependency injection”. It also says that “Dependency injection (DI) is a technique for achieving loose coupling between objects and their collaborators, or dependencies” (read Martin Fowler’s article for in-depth coverage of DI). I have used DI for years and I love it – it …

HOW TO UPDATE A DATABASE’S SCHEMA WITHOUT USING EF CORE’S …
This article is aimed at developers who want to use EF Core to access the database but want complete control over their database schema. I decided to write this article after seeing the EF Core Community Standup covering the EF Core 6.0 survey results. In that video there was a page looking at the ways people deploy changes to production (link to the video at that point), and quite a few …

PART 4: BUILDING A ROBUST AND SECURE DATA AUTHORIZATION
This article covers how to implement data authorization using Entity Framework Core (EF Core), that is, only returning data from a database that the current user is allowed to access. The article focuses on how to implement data authorization in such a way that the filtering is very secure, i.e. the filter always works, and the architecture is robust, i.e. the design of the filtering doesn’t …

A TECHNIQUE FOR BUILDING HIGH-PERFORMANCE DATABASES WITH …
As the writer of the book “Entity Framework Core in Action” I get asked to build, or fix, applications using Entity Framework Core (EF Core) to be “fast”. Typically, “fast” means the database queries (reads) should return quickly, which in turn improves the …

SIX THINGS I LEARNT ABOUT USING ASP.NET CORE’S RAZOR PAGES
ASP.NET Core 2.0 introduced a new way to build a web site, called Razor Pages. I was interested in the new Razor Pages approach, as I hoped Razor Pages would allow me to code better by following the SOLID principles – and they do. But first I had to work out how to use Razor Pages.

HANDLING ENTITY FRAMEWORK CORE DATABASE MIGRATIONS IN …
I decided to write a series about the different ways you can safely migrate a database, i.e. change the database’s schema, using Entity Framework Core (EF Core). This is the first part of the series, which looks at how to create a migration, while part 2 deals with how to apply a migration to a database, specifically a production database.
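The migrations material above distinguishes creating a migration from applying one. A minimal sketch of applying pending migrations in code, assuming a hypothetical `AppDbContext` (applying migrations safely to a production database is the subject of part 2 of the series):

```csharp
using Microsoft.EntityFrameworkCore;

public class AppDbContext : DbContext
{
    public AppDbContext(DbContextOptions<AppDbContext> options)
        : base(options) { }
}

public static class MigrationRunner
{
    // Database.Migrate() applies any migrations that have not yet been
    // applied to the database. Calling it on startup is the simplest
    // approach, but it couples schema changes to application deployment.
    public static void ApplyMigrations(AppDbContext context)
    {
        context.Database.Migrate();
    }
}
```
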
SIX WAYS TO BUILD BETTER ENTITY FRAMEWORK (CORE AND EF6) …
The six principles and patterns are: Separation of concerns – building the right architecture. The Service Layer – separating data actions from presentation actions. Repositories – picking the right sort of database access pattern. Dependency injection – turning your database code into services. …

PART 1: A BETTER WAY TO HANDLE AUTHORIZATION IN ASP.NET …
I was asked by one of my clients to help build a fairly large web application, and their authentication (i.e. checking who is logging in) and authorization (i.e. what pages/features the logged-in user can access) is very complex. From my experience I knew that using ASP.NET’s role-based approach wouldn’t cut it, and I found the new ASP.NET Core policy-based approach really clever, but it …
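The policy-based authorization approach mentioned in the Part 1 excerpt can be sketched as follows; the policy and claim names here are invented for illustration and are not from the article:

```csharp
using Microsoft.AspNetCore.Authorization;
using Microsoft.Extensions.DependencyInjection;

public static class AuthorizationSetup
{
    public static IServiceCollection AddPermissionPolicies(
        this IServiceCollection services)
    {
        services.AddAuthorization(options =>
        {
            // Policy-based authorization checks claims on the logged-in
            // user, rather than fixed role names. A controller or Razor
            // Page can then use [Authorize(Policy = "CanImpersonate")].
            options.AddPolicy("CanImpersonate", policy =>
                policy.RequireClaim("Permission", "Impersonate"));
        });
        return services;
    }
}
```
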
UPDATING MANY TO MANY RELATIONSHIPS IN ENTITY FRAMEWORK
I wrote an article called “Updating many to many relationships in entity framework” back in 2014, which is still proving to be popular in 2017. To celebrate the release of my book “Entity Framework Core in Action” I am producing an updated version of that article, but for Entity Framework Core (EF Core). All the information and the code comes from Chapter 2 of my book.

USING IN-MEMORY DATABASES FOR UNIT TESTING EF CORE
While writing the book “Entity Framework Core in Action” I wrote over 600 unit tests, which taught me a lot about unit testing EF Core applications. So, when I wrote the chapter on unit testing, which was the last in the book, I combined what I had learnt into a library called EfCore.TestSupport. Note: EfCore.TestSupport is an open-source library (MIT licence) with a NuGet package available.

ENTITY FRAMEWORK: WORKING WITH AN EXISTING DATABASE
1. Creating the Entity Framework classes from the existing database. Entity Framework has a well-documented approach, called reverse engineering, to create the EF entity classes and DbContext from an existing database, which you can read about here. This produces data classes with various Data Annotations to set some of the properties, such as string …
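Related listings above mention updating many-to-many relationships in EF Core 5 and above. In EF Core 5 a many-to-many relationship can use skip navigations, so updating it means changing a collection; this is a hedged sketch with invented `Post`/`Tag` entities, not code from the articles:

```csharp
using System.Collections.Generic;
using System.Linq;
using Microsoft.EntityFrameworkCore;

public class Tag
{
    public int TagId { get; set; }
    public string Name { get; set; }
}

public class Post
{
    public int PostId { get; set; }
    // EF Core 5 skip navigation: no explicit join entity is needed;
    // EF Core maintains the hidden join table itself.
    public ICollection<Tag> Tags { get; set; } = new List<Tag>();
}

public static class ManyToManyUpdate
{
    // Load the relationship, mutate the collection, and SaveChanges
    // writes the corresponding join-table rows.
    public static void ReplaceTags(DbContext context, int postId, Tag newTag)
    {
        var post = context.Set<Post>()
            .Include(p => p.Tags)
            .Single(p => p.PostId == postId);
        post.Tags.Clear();
        post.Tags.Add(newTag);
        context.SaveChanges();
    }
}
```
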
I have decided that for complex database changes I will take the web site “Down for maintenance” while I make the changes. PART 1: A BETTER WAY TO HANDLE AUTHORIZATION IN ASP.NET I was asked by one of my clients to help build a fairly large web application, and their authentication (i.e. checking who is logging in) and authorization (i.e. what pages/feature the logged in user can access) is very complex. From my experience a knew that using ASP.NET’s Role-based approach wouldn’t cut it, and I found the new ASP.NET Core policy-based approach really clever but it NEW FEATURES FOR UNIT TESTING YOUR ENTITY FRAMEWORK CORE 5 The new features in EF Core 5 that help with unit testing. I am going to cover four features that have changes in the EfCore.TestSupport library, either because of new features in EF Core 5, or improvements that have been added to the library. Creating a PART 4: BUILDING A ROBUST AND SECURE DATA AUTHORIZATION This article covers how to implement data authorization using Entity Framework Core (EF Core), that is only returning data from a database that the current user is allowed to access. The article focuses on how to implement data authorization in such a way that the filtering is very secure, i.e. the filter always works, and the architecture is robust, i.e. the design of the filtering doesn’tASP.NET CORE
Microsoft’s documentation says “ASP.NET Core is designed from the ground up to support and leverage dependency injection”. It also says that “Dependency injection (DI) is a technique for achieving loose coupling between objects and their collaborators, or dependencies.” (read Martin Fowler’s article for the in-depth coverage of DI).. I have used DI for years and I love it – it UPDATING MANY TO MANY RELATIONSHIPS IN ENTITY FRAMEWORK I wrote an article called Updating many to many relationships in entity framework back on 2014 which is still proving to be popular in 2017. To celebrate the release of my book Entity Framework Core in Action I am producing an updated version of that article, but for Entity Framework Core (EF Core).. All the information and the code comes from Chapter 2 of my book. USING IN-MEMORY DATABASES FOR UNIT TESTING EF CORE While writing the book Entity Framework Core in Action I wrote over 600 unit tests, which taught me a lot about unit testing EF Core applications. So, when I wrote the chapter on unit testing, which was the last in the book, I combined what I had learn into a library called EfCore.TestSupport.. Note: EfCore.TestSupport is an open-source library (MIT licence) with a NuGet package available. ENTITY FRAMEWORK: WORKING WITH AN EXISTING DATABASE 1. Creating the Entity Framework Classes from the existing database. Entity Framework has a well documented approach, called reverse engineering, to create the EF Entity Classes and DbContext from an existing database which you can read here. This produces data classes with various Data Annotations to set some of the properties, such asstring
IS THE REPOSITORY PATTERN USEFUL WITH ENTITY FRAMEWORK This is a series: Part 1: Analysing whether Repository pattern useful with Entity Framework (this article).Part 2: Four months on – my solution to replacing the Repository pattern. UPDATE (2018): Big re-write to take into account Entity Framework Core, and further learning. I have just finished the first release of Spatial Modeller™, a medium sized ASP.NET MVC web application. SIX THINGS I LEARNT ABOUT USING ASP.NET CORE’S RAZOR PAGES Six things I learnt about using ASP.NET Core’s Razor Pages. ASP.NET Core 2.0 introduced a new way to build a web site, called Razor Pages. I was interested in the new Razor Pages approach, as I hoped Razor Pages would allow me to code better by following the SOLID principals – and they do. But first I had to work out how to use Razor Pages.Skip to content
THE REFORMED PROGRAMMER
I am a freelance .NET Core back-end developer.
PART 7 – ADDING THE “BETTER ASP.NET CORE AUTHORIZATION” CODE INTO YOUR APP
Last Updated: September 29, 2019 | Created: September 28, 2019

I have written a series about a better way to handle authorization in ASP.NET Core, which adds extra ASP.NET Core authorization features: things like the ability to change role rules without having to edit and redeploy your web app, or the ability to impersonate a user without needing their password. This series has quickly become the top set of articles on my web site, with lots of comments and questions. A lot of people like what I describe in these articles, but they have problems extracting the bits of code needed to implement the features. This article, with its improved PermissionAccessControl2 repo, is here to make it much easier to pick out the code you need and put it into your ASP.NET Core application. I then give a step-by-step example of selecting a “better authorization” feature and putting it into an ASP.NET Core 3.0 app. I hope this will help people who like the features I describe but found the code really hard to understand. Here is a list of all the articles in the series so that you can find information on each feature.
* Part 1: A better way to handle authorization in ASP.NET Core.
* Part 2: Handling data authorization in ASP.NET Core and Entity Framework Core.
* Part 3: A better way to handle authorization – six months on.
* Part 4: Building robust and secure data authorization with EF Core.
* Part 5: A better way to handle authorization – refreshing user’s claims.
* Part 6: Adding user impersonation to an ASP.NET Core web application.
* Part 7: Implementing the “better ASP.NET Core authorization” code in your app (THIS ARTICLE).

TL;DR; – SUMMARY
* The “better authorisation” series provides a number of extra features on top of ASP.NET Core’s default Role-based authorisation system.
* I have done a major refactor of the companion PermissionAccessControl2 repo on GitHub to make it easier for someone to copy the code they need for the features they want to put in their code.
* I have also built separate authorization setup classes for the different combinations of my authorization features. This makes it easier for you to pick the feature set that works for you.
* I have also improved/simplified some of the code to make it easier to understand.
* I then give a step-by-step description of what you need to do to add a “better authorization” feature into your ASP.NET Core application.
* I have made a copy of the code I produced in these steps available via the PermissionsOnlyApp repo on GitHub.
SETTING THE SCENE – THE NEW STRUCTURE OF THE CODE

Over the six articles I have added new features on top of the features that I had already implemented. This meant the final version in the PermissionAccessControl2 repo was really complex, which made it hard work for people to pick out the simple version that they needed. In this big refactor I have split the code into separate projects so that it's easy to see what each part of the application does. The diagram below shows the updated application projects and how they link to each other. This may look complex, but many of these projects contain fewer than ten classes. These projects allow you to copy classes etc. from the projects that cover the features you want, and ignore the projects that cover features you don’t want.

The other problem area was the database. Again, I wanted the classes relating to the added authorization code kept separate from the classes I used to provide a simple, multi-tenant example. This led me to have a non-standard database that was hard to create/migrate. The good news is you only need to copy parts of the ExtraAuthorizeDbContext DbContext over into your own DbContext, as you will see in the next sections.

THE SEVEN VERSIONS OF THE FEATURES

Before I get into the step-by-step guide, I want to list the seven different classes that provide different mixes of features that you can now easily pick and choose from. In fact, the PermissionAccessControl2 application allows you to configure different parts of the application so that you can try different features without editing any code. The feature selection is done by the “DemoSetup” section in the appsettings.json file. The AddClaimsToCookies class reads the “AuthVersion” value on startup to select what code to use to set up the authorization. Here are the seven classes that can be used, with their “AuthVersion” value.

* Applied when you log in (via IUserClaimsPrincipalFactory):
  * LoginPermissions: AddPermissionsToUserClaims class.
  * LoginPermissionsDataKey: AddPermissionsDataKeyToUserClaims class.
* Applied via Cookie event:
  * PermissionsOnly: AuthCookieValidatePermissionsOnly class.
  * PermissionsDataKey: AuthCookieValidatePermissionsDataKey class.
  * RefreshClaims: AuthCookieValidateRefreshClaims class.
  * Impersonation: AuthCookieValidateImpersonation class.
  * Everything: AuthCookieValidateEverything class.
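Putting the description above together, selecting a feature set might look something like this in appsettings.json. This is a hedged sketch: the section name “DemoSetup”, the key “AuthVersion”, and the value “PermissionsOnly” come from this article, but check the repo for the exact file format.

```json
{
  "DemoSetup": {
    "AuthVersion": "PermissionsOnly"
  }
}
```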
Having all these versions makes it easier for you to select the feature set that you need for your specific application.

STEP-BY-STEP ADDITION TO YOUR ASP.NET CORE APP

I am now going to take you through the steps of copying over the most asked-for feature in my “better authorization” series – that is the Roles-to-Permissions system, which I describe in the first article. This gives you two features that aren’t in ASP.NET Core’s Roles-based authorization, that is:

* You can change the authorization rules in your application without needing to edit code and re-deploy your application.
* You have simpler authorization code in your application (see the section “Role authorization and its limitations” for more on this).

I am also going to use the more efficient UserClaimsPrincipalFactory method (see this section of article 3 for more on this) of adding the Permissions to the user’s claims. This adds the correct Permissions to the user’s claims when they log in.
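The UserClaimsPrincipalFactory approach works by replacing the factory that builds the user’s ClaimsPrincipal at login. As a rough sketch of the shape of that class – the real AddPermissionsToUserClaims is in the PermissionAccessControl2 repo; the claim type name and the permissions lookup below are placeholders of mine, not the repo’s code:

```csharp
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Options;

//Sketch only: a claims factory that adds a packed Permissions claim at login
public class AddPermissionsToUserClaims : UserClaimsPrincipalFactory<IdentityUser>
{
    public AddPermissionsToUserClaims(UserManager<IdentityUser> userManager,
        IOptions<IdentityOptions> optionsAccessor)
        : base(userManager, optionsAccessor) { }

    protected override async Task<ClaimsIdentity> GenerateClaimsAsync(IdentityUser user)
    {
        var identity = await base.GenerateClaimsAsync(user);
        //Look up the user's Roles and convert them to a packed Permissions string.
        //The real code reads the ExtraAuthClasses tables; this is a placeholder.
        var packedPermissions = "...";
        identity.AddClaim(new Claim("Permissions", packedPermissions));
        return identity;
    }
}
```

Because this runs once at login, the Permissions end up in the authentication cookie and no database access is needed on later requests.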
And because .NET Core 3 is now out, I’m going to show you how to do this for an ASP.NET Core 3.0 MVC application.

> NOTE: I have great respect for Microsoft’s documentation, which has become outstanding with the advent of NET Core. The amount of information and updates on NET Core was especially good – fantastic job everyone. I have also come to love Microsoft’s step-by-step format, which is why I have tried to do the same in this article. I don’t think I made it as good as Microsoft’s, but I hope it helps someone.

STEP 1: IS YOUR APPLICATION READY TO TAKE THE CODE?

Firstly, you must have some form of authentication system, i.e. something that checks the user is allowed to log in. It can be any form of authentication system (my original client used Auth0 with the OAuth2 setup in ASP.NET Core). The code in my example PermissionAccessControl2 repo has some approaches that work with most authentication setups, but some of the more complex ones only work with systems that store claims in a cookie (although the approach could be altered to tokens etc.). In my example I’m going to go with an ASP.NET Core 3.0 MVC app with Authentication set to “Individual User Accounts->Store user accounts in-app”. This means a database is added to your application, and by default it uses cookies to hold the logged-in user’s Claims. Your system may be different, but with this refactor it's easier for you to work out what code you need to copy over.

STEP 2: WHAT CODE WILL WE NEED FROM THE PERMISSIONACCESSCONTROL2 REPO?

You need to start out by deciding what parts of the PermissionAccessControl2 projects you need. This will depend on what features you want. Because the projects have names that fit the features, it is easier to find things. In my example I’m only going to use the core Roles-to-Permissions feature, so I only need the code in the following projects:

* AuthorizeSetup: just the AddPermissionsToUserClaims class.
* FeatureAuthorize: all the code.
* DataLayer: a cut-down version of the ExtraAuthorizeDbContext DbContext (no DataKey, no IAuthChange) and some, but not all, of the classes in the ExtraAuthClasses folder.
* PermissionParts: all the code.

That means I only need to look at about 50% of the projects in the PermissionAccessControl2 repo.

STEP 3: WHERE SHOULD I PUT THE PERMISSIONACCESSCONTROL2 CODE IN MY APP?
This is an architectural/style decision and there aren’t any firm rules here. I lean towards separating my apps into layers (see the diagram in this section of an article on business logic). There are lots of ways I could do it, but I went for a simple design as shown below.

STEP 4: MOVING THE PERMISSIONSPARTS INTO DATALAYER

That was straightforward – no edits needed and no NuGet packages needed to be installed.

STEP 5: MOVING THE EXTRAAUTHCLASSES INTO DATALAYER

The ExtraAuthClasses contain code for all the features, which makes this the most complex step, as you need to remove the parts that aren’t used by the features you want. Here is a list of what I needed to do.
5.A REMOVE UNWANTED EXTRAAUTHCLASSES

I start by removing any ExtraAuthClasses that I don’t need because I’m not using those features. In my example this means I delete the following classes:

* TimeStore – only needed for the “refresh claims” feature.
* UserDataAccess, UserDataAccessBase and UserDataHierarchical – these are about the Data Authorize feature, which we don’t want.

5.B FIX ERRORS

Because I didn’t copy over the projects/code for features I don’t want, some things showed up as compile errors, and some just needed to be changed/deleted.

* Make sure the Microsoft.EntityFrameworkCore NuGet package has been added to the DataLayer.
* You will also need the Microsoft.EntityFrameworkCore.Relational NuGet package in DataLayer for some of the configuration.
* Remove the IChangeEffectsUser and IAddRemoveEffectsUser interfaces from classes – they are for the “refresh claims” feature.
* Remove any using statements that refer to left-out/moved projects.
> NOTE: I used some classes/interfaces from my EfCore.GenericServices library to handle errors in some of the methods in my ExtraAuthClasses. But I am building this app three days after the release of NET Core 3.0 and I haven’t (yet) updated GenericServices to NET Core 3.0, so I added a project called GenericServicesStandIn to host the Status classes.
5.C ADDING EXTRAAUTHCLASSES TO YOUR DBCONTEXT

I assume you will have a DbContext that you use to access the database. Your application DbContext might be quite complex, but in my example PermissionOnlyApp I started with a basic application DbContext, as shown below.

```csharp
public class MyDbContext : DbContext
{
    //your DbSet<T> properties go here

    public MyDbContext(DbContextOptions<MyDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Your configuration code will go here
    }
}
```

Then I needed to add the DbSet<T> properties and the extra configuration for the ExtraAuthClasses, as shown below.

```csharp
public class MyDbContext : DbContext
{
    //your DbSet<T> properties go here

    //The ExtraAuthClasses needed for the Roles-to-Permissions feature
    //(see the ExtraAuthorizeDbContext in the repo for the full list)
    public DbSet<UserToRole> UserToRoles { get; set; }
    public DbSet<RoleToPermissions> RolesToPermissions { get; set; }

    public MyDbContext(DbContextOptions<MyDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //Your configuration code will go here

        //ExtraAuthClasses extra config
        modelBuilder.Entity<UserToRole>()
            .HasKey(x => new { x.UserId, x.RoleName });
    }
}
```
Now you can replace some of the references to ExtraAuthorizeDbContext in the ExtraAuthClasses.

STEP 6: MOVE THE CODE INTO THE ROLESTOPERMISSIONS PROJECT

Now we need to move the last of the Roles-to-Permissions code into the RolesToPermissions project. Here are the steps I took.

6A. MOVE CODE INTO THIS PROJECT FROM THE PERMISSIONACCESSCONTROL2 VERSION

What you move in depends on what features you want. If you have some of the complex features, like refreshing the user’s claims on auth changes, or user impersonation, you might want to put each part in a separate folder. But in my example I only need code from one project, plus one class picked from another. Here is what I did:

* I moved all the code in the FeatureAuthorize project (including the code in the PolicyCode folder).
* I then copied over the specific setup code I needed for the feature I wanted; in this case this was the AddPermissionsToUserClaims class.

At this point there will be lots of errors, but before we sort those out you have to select the specific setup code you need.

6B. ADD PROJECT AND NUGET PACKAGES NEEDED BY THE CODE

This code needs access to a number of support classes to work. The error messages on the “usings” in the code will show you what is missing. The most obvious is that this project needs a reference to the DataLayer. Then there are NuGet packages – I only needed three, but you might need more.

6C. FIX REFERENCES AND “USINGS”

At this point I still have compile errors, both because the application’s DbContext has a different name and because of some old (bad) “usings”. The steps are:

* The code refers to ExtraAuthorizeDbContext, but now it needs to reference your application DbContext. In my example that is called MyDbContext. You will also need to add a “using DataLayer” to reference that.
* You will have a number of incorrect “usings” at the top of various files – just delete them.

STEP 7 – SETTING UP THE ASP.NET CORE APP

We have added all the Roles-to-Permissions code we need, but the ASP.NET Core code doesn’t use it yet. In this section I will add code to the ASP.NET Core Startup class to link AddPermissionsToUserClaims into the UserClaimsPrincipalFactory. I also assume you have a database you already use, to which I have added the ExtraAuthClasses. Here is the code that does all this.

```csharp
public void ConfigureServices(IServiceCollection services)
{
    // … other standard service registration removed for clarity

    //This registers your database, which now includes the ExtraAuthClasses
    //(shown here with the template's default SQL Server setup – yours may differ)
    services.AddDbContext<MyDbContext>(options =>
        options.UseSqlServer(
            Configuration.GetConnectionString("DefaultConnection")));

    //This registers the class that adds the Permissions
    //to the user's claims when they log in
    services.AddScoped<
        IUserClaimsPrincipalFactory<IdentityUser>,
        AddPermissionsToUserClaims>();
}
```
> NOTE: In my example app I also set options.SignIn.RequireConfirmedAccount to false in the AddDefaultIdentity method that registers ASP.NET Core Identity. This allows me to log in via a user I add at startup (see next section). This demo user won’t have its email verified, so I need to turn off that constraint. In a real system you might not need that.

At this point all the code is linked up, but we need an EF Core migration to add the ExtraAuthClasses to your application DbContext. Here is the command I run in Visual Studio’s Package Manager Console – note that it has to have extra parameters because there are two DbContexts (the other one is the ASP.NET Identity ApplicationDbContext).

Add-Migration AddExtraAuthClasses -Context MyDbContext -Project DataLayer
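If you use the dotnet CLI rather than the Package Manager Console, the equivalent dotnet-ef command should look something like this – a sketch based on the parameters above; adjust the project names to your solution:

```shell
# Equivalent of the Package Manager Console command, using the dotnet-ef tool
dotnet ef migrations add AddExtraAuthClasses --context MyDbContext --project DataLayer
```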
STEP 8 – ADD PERMISSIONS TO YOUR CONTROLLER ACTIONS/RAZOR PAGES

All the code is in place, so now you can use Permissions to protect your ASP.NET Core Controller actions or Razor Pages. I explain the Roles-to-Permissions concept in detail in the first article, so I’m not going to cover that again. I’m just going to show you how a) to add a Permission to the Permissions enum and then b) protect an ASP.NET Core Controller action with that Permission.

8A. EDIT THE PERMISSIONS.CS FILE

In the code you copied into the PermissionsParts folder in the DataLayer you will find a file called Permissions.cs, which defines the enum called “Permissions”. It has some example Permissions, which you can remove, apart from the one named “AccessAll” (that is used by the SuperAdmin user). Then you can add the Permission names you want to use in your application. I typically give each Permission a number, but you don’t have to – the compiler will do that for you. Here is my updated Permissions enum.

```csharp
public enum Permissions : short //I set this to short because the PermissionsPacker stores them as Unicode chars
{
    NotSet = 0, //error condition

    DemoPermission = 10,

    //This is a special Permission used by the SuperAdmin user.
    //A user that has this permission can access any other permission.
    AccessAll = Int16.MaxValue,
}
```
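The comment on the enum’s underlying type refers to how the Permissions end up in a single claim: each short value becomes one Unicode char in a packed string. Here is a rough sketch of that idea – my own illustration using the Permissions enum shown above; the real packing/checking code lives in the PermissionParts folder of the repo, and the method names here are made up:

```csharp
using System.Linq;

//Illustrative sketch: Permissions packed as Unicode chars in one claim string
public static class PermissionPackingSketch
{
    //Each Permission's short value becomes one char of the string
    public static string PackPermissions(params Permissions[] permissions)
        => new string(permissions.Select(p => (char)p).ToArray());

    //A permission is allowed if it is in the packed string,
    //or the user holds the special AccessAll permission
    public static bool IsAllowed(string packedPermissions, Permissions needed)
        => packedPermissions.Any(c => (Permissions)c == needed
                                   || (Permissions)c == Permissions.AccessAll);
}
```

Packing the Permissions like this keeps the claim small, so the whole set fits comfortably inside the authentication cookie.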
8B. ADD THE HASPERMISSION ATTRIBUTE TO A CONTROLLER ACTION

To protect an ASP.NET Core Controller action or Razor Page you apply the “HasPermission” attribute to it. Here is an example from the PermissionOnlyApp.

```csharp
public class DemoController : Controller
{
    [HasPermission(Permissions.DemoPermission)]
    public IActionResult Index()
    {
        return View();
    }
}
```

For this action to be executed the caller must a) be logged in and b) either have the Permission “DemoPermission” in their set of Permissions, or be the SuperAdmin user who has the special Permission called “AccessAll”. At this point you are good to go, apart from creating the databases.

STEP 9 – EXTRA STEPS TO MAKE IT EASIER FOR PEOPLE TO TRY THE APPLICATION
I could stop there, but I like to make sure someone can easily run the application to try it out. I am therefore going to add:

* Some code in Program to automatically migrate both databases on startup (NOTE: this is NOT the best way to do migrations, as it fails in certain circumstances – see my articles on EF Core database migrations).
* A SuperAdmin user, added to the system on startup if they aren’t there. That way you always have an admin user available to log in on startup – see this section from the part 3 article about the SuperAdmin and what that user can do.

I’m not going to detail this because you will have your own way of setting up your application, but it does mean you can just run this application from VS2019 and it should work OK (it does use SQL localdb).

> NOTE: The first time you start the application it will take a VERY long time to start up (> 25 seconds on my system). This is because it is applying migrations to two databases. The second time will be much faster.
CONCLUSION
Quite a few people have contacted me with questions about how they can add the features I described in this series into their code. I’m sorry it was so complex, and I hope this new article helps. I also took the time to improve/simplify some of the code. Hopefully this will make it easier to understand and transfer the ideas and code that go with these articles. All the best with your ASP.NET Core applications!

Categories: ASP.NET Core | Tags: ASP.NET Core, Authorization, Identity, NET Core
ADDING USER IMPERSONATION TO AN ASP.NET CORE WEB APPLICATION

Last Updated: September 28, 2019 | Created: August 21, 2019

If you supply a service via a web application (known as Software as a Service, SaaS for short) then you need to support users that have problems with your site. One useful tool for your support team is to be able to access your service as if you were the customer who is having the problem. Clearly you don’t want to use their password (which you shouldn’t know anyway), so you need another way to do this. In this article I describe what is known as _user impersonation_ and describe one way to implement this feature in an ASP.NET Core web app.
> _Note: User impersonation does have privacy issues around user data, because the support person will be able to access their data. Therefore, if you do use user impersonation you need to either get specific permission from a user before using this feature, or add it to your T&Cs._

I have put a lot of work into ASP.NET Core authorization features for both my clients and readers, and this article adds another useful feature to a long series looking at feature and data authorization. The whole series consists of:

* Part 1: A better way to handle authorization in ASP.NET Core – original article.
* Part 2: Handling data authorization in ASP.NET Core and Entity Framework Core.
* Part 3: A better way to handle authorization – six months on – improvements.
* Part 4: Building robust and secure data authorization with EF Core.
* Part 5: A better way to handle authorization – refreshing user’s claims.
* Part 6: Adding user impersonation to an ASP.NET Core web application (THIS ARTICLE).
* Part 7: Implementing the “better ASP.NET Core authorization” code in your app.
There is an example web application in a GitHub repo called PermissionAccessControl2, which goes with articles 3 to 6. This is an open-source (MIT licence) application that you can clone and run locally. By default it uses in-memory databases which are seeded with demo data on start-up. That means it's easy to run and see how the “user impersonation” feature works in practice.

> _Note: To try out the “user impersonation” you should run the PermissionAccessControl2 ASP.NET Core project. Then you need to log in as the SuperAdmin user (email Super@g1.com with password Super@g1.com) and then go to Impersonation->Pick user. You can see the changes by using the user’s menu, or by using one of the shop features._
TL;DR; – SUMMARY
* The user impersonation feature allows a current user, normally a support person, to change their feature and data authorization settings to match another user. This means that the support user will experience the system as if they are the impersonated user.
* User impersonation is useful in SaaS systems for investigating/fixing problems that customers encounter. This is especially true in systems where each organization/user has their own data, as it allows the support person to access the user’s data.
* I add this feature to my “better authorization” system as described in this series, but the described approach can also be applied to ASP.NET Core Identity systems using Roles etc.
* All the code, plus a working ASP.NET Core example, is available via the GitHub repo called PermissionAccessControl2. It is open-source.

SETTING THE SCENE – WHAT SHOULD A “USER IMPERSONATION” FEATURE DO?
User impersonation gives you access to the services and data that a user has. Typical things you might use impersonation for are:

* If a customer is having problems, then user impersonation allows you to access the customer’s data as if you were them.
* If a customer reports that there is a bug in your system, you can check it out using their own settings and data.
* Some customers might pay for enhanced support where your staff can enter data or fix problems that they are struggling with.

_Note: In the conclusion I list some other benefits that I have found in development and testing._

So, to do any of these things the support person must have:

* The same feature settings, e.g. ASP.NET Core Roles, that the user has (especially for items 1 and 2) – known as _feature authorization_, which in my system is handled by _Permissions_.
* Access to the same data that the user has – known as _data authorization_, which in my system is handled by the _DataKey_.

_Note: Many SaaS systems have separate data per organization and/or per user. This is known as a multi-tenant system and I cover this in the Part 2 and Part 4 articles in the “better authorization” series._

But I don’t want to change the current user’s “UserId”, which holds a unique value for each user (e.g. a string containing a GUID). By keeping the support user’s UserId I can use it in any logging or tracking of changes to the database. That way, if there are any problems, you can clearly see who did what. There is another part of the user’s identity called “Name”, which typically holds the user’s email address or name. Because of data privacy laws such as GDPR I don’t use this in logging or tracking. So, taking all these parts of the user’s identity, here is a list of what I want to impersonate and what I leave as the original (support) user’s setting:

* Feature authorization, e.g. Roles, Permissions – use the impersonated user’s settings (see note below).
* Data authorization, e.g. data key – use the impersonated user’s settings.
* UserId, i.e. unique user value – keep the support user’s UserId.
* Name, e.g. their email address – depends: in my system I keep the support user’s UserName.

_Note: In most impersonation cases the feature authorizations should change to the settings of the user you are impersonating – that way the support user will see what the user sees. But sometimes it’s useful to keep the feature authorization of the support person who is doing the impersonating – that might unlock more powerful features that will help in quickly fixing some more complex issues. I provide both options in my implementation._

Now I will cover how to add a “user impersonation” feature to an ASP.NET Core web application.

THE ARCHITECTURE OF MY “USER IMPERSONATION” FEATURE

To implement my “user impersonation” feature I have tapped into ASP.NET Core’s application Cookie event called OnValidatePrincipal (I use this a lot in the “better authorization” series). This event happens on every HTTP request and provides me with the opportunity to change the user’s _Claims_ (each of the user’s settings is held in a Claim class, and these are stored as key/value strings in the authentication cookie or token). My user impersonation code is controlled by the existence of a cookie defined in the class ImpersonationCookie.
As shown in the diagram below, the code linked to the OnValidatePrincipal event looks for this cookie and goes into impersonation mode while that cookie is present, reverting to normal when the impersonation cookie is deleted.
I will describe this process in the following stages: * A look at the impersonation cookie * A look at the ImpersonationHandler * The adaptions to the code called by the OnValidatePrincipal event. * The impersonation services. * Making sure that the impersonation feature is robust. 1. A LOOK AT THE IMPERSONATION COOKIE I use an ImpersonationCookie to control impersonation: it holds data needed to set up the impersonation, and its existence keeps the impersonation going. The code handling the OnValidatePrincipal event can read that cookie via the event’s CookieContext, which includes the HttpContext. The cookie payload (a string) comes from a class called ImpersonationData that holds this data and can convert it to/from a string. The three values are:
* The UserId of the user to be impersonated * The Name of the user to be impersonated (only used to display the name of the user being impersonated) * A Boolean called KeepOwnPermissions (see next paragraph). While impersonating a user the support person usually takes on the Permissions of the impersonated user so that the application will react as if the impersonated user is logged in. However, I have seen situations where it’s useful to have the support user’s more comprehensive Permissions, for instance to access extra commands that the impersonated user wouldn’t normally have access to. That is why I have the KeepOwnPermissions value, which if true will keep the support user’s Permissions. All three values have some (admittedly low) security issues, so I use ASP.NET Core’s Data Protection feature to encrypt/decrypt the string holding the ImpersonationData. 2. A LOOK AT THE IMPERSONATIONHANDLER I have put as much of the code that handles the impersonation as possible into one class called ImpersonationHandler.
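As an illustration of what the ImpersonationData class from step 1 might look like, here is a minimal sketch. The pack format, separator and Unpack method name are my assumptions for illustration only; the real class in the PermissionAccessControl2 repo also runs the packed string through the Data Protection provider mentioned above.

```csharp
using System;

public class ImpersonationData
{
    private const char Separator = ','; //assumed separator – illustrative only

    public string UserId { get; }
    public string UserName { get; }
    public bool KeepOwnPermissions { get; }

    public ImpersonationData(string userId, string userName, bool keepOwnPermissions)
    {
        UserId = userId;
        UserName = userName;
        KeepOwnPermissions = keepOwnPermissions;
    }

    //Packs the three values into a single string to go in the cookie
    public string GetPackImpersonationData()
        => $"{UserId}{Separator}{UserName}{Separator}{KeepOwnPermissions}";

    //Recreates the class from the cookie's string payload (name is illustrative)
    public static ImpersonationData Unpack(string packed)
    {
        var parts = packed.Split(Separator);
        if (parts.Length != 3)
            throw new ArgumentException("The impersonation data was corrupted.");
        return new ImpersonationData(parts[0], parts[1], bool.Parse(parts[2]));
    }
}
```

The GetPackImpersonationData method is the one the StartImpersonation service (shown later) calls before the string is encrypted and written into the cookie.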
On being created it a) checks if the impersonation cookie exists and b) checks if an impersonation claim is found in the current user’s claims. From these two tests it can work out what state the current user is in. The states are listed below: * NORMAL: Not impersonating * STARTING: Starting impersonation * IMPERSONATING: Is impersonating * STOPPING: Stopping impersonation The only time the Permissions and DataKey need to be recalculated is when the state is STARTING or STOPPING, and the ImpersonationHandler has a property called ImpersonationChange which is true in those cases. This minimises the recalculations, which provides good performance. (Note: a recalculation will also happen if you are using the “refreshing user’s claims” feature described in the Part 5 article.)
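The two tests and four states described above boil down to a small piece of pure logic. Here is a self-contained sketch of that decision (the class and method names here are illustrative – the real ImpersonationHandler also reads and decrypts the cookie and inspects the claims itself):

```csharp
public enum ImpersonationStates { Normal, Starting, Impersonating, Stopping }

public static class ImpersonationStateSketch
{
    //a) does the impersonation cookie exist?  b) is the impersonation claim present?
    public static ImpersonationStates GetState(bool cookieExists, bool claimExists)
    {
        if (cookieExists)
            return claimExists
                ? ImpersonationStates.Impersonating //cookie + claim: already impersonating
                : ImpersonationStates.Starting;     //cookie but no claim yet: starting
        return claimExists
            ? ImpersonationStates.Stopping          //claim left over, cookie gone: stopping
            : ImpersonationStates.Normal;           //neither: not impersonating
    }

    //true only in the two states where the claims must be recalculated
    public static bool ImpersonationChange(ImpersonationStates state)
        => state == ImpersonationStates.Starting
           || state == ImpersonationStates.Stopping;
}
```

For example, a request that arrives with the cookie present but no impersonation claim is in the STARTING state, so ImpersonationChange is true and the claims get recalculated on that request.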
The recalculation of the Permissions and DataKey needs a UserId, and there are two public methods, GetUserIdForWorkingOutPermissions and GetUserIdForWorkingDataKey, which provide the correct UserId based on the impersonation state and the “KeepOwnPermissions” setting (see step 1). (I used two separate methods for the Permissions and DataKey because “KeepOwnPermissions” affects the Permissions’ UserId returned but doesn’t affect the DataKey’s UserId.) The other public method needed to set the user claims is called AddOrRemoveImpersonationClaim. Its job is to add or remove the “Impersonalising” claim. This claim is used to a) tell the ImpersonationHandler whether it is already impersonating and b) hold the Name of the user being impersonated, which gives a visual display of which user you are impersonating. 3. THE ADAPTIONS TO THE CODE CALLED BY THE ONVALIDATEPRINCIPAL EVENT Anyone who has been following this series will know that I tap into the authorization cookie’s OnValidatePrincipal event. This event happens on every HTTP request and allows the claims and the authorization cookie to be changed. For performance reasons you do need to minimise what you do in this event as it runs so often.
I have already described in detail the code called by the OnValidatePrincipal event here in Part 1.
Below is the updated PermissionAccessControl2 code, now with the impersonation added. There are some notes at the end that describe the parts added to provide the impersonation feature.

public async Task ValidateAsync(CookieValidatePrincipalContext context)
{
    var extraContext = new ExtraAuthorizeDbContext(
        _extraAuthContextOptions, _authChanges);
    var rtoPLazy = new Lazy<CalcAllowedPermissions>(() =>
        new CalcAllowedPermissions(extraContext));
    var dataKeyLazy = new Lazy<CalcDataKey>(() =>
        new CalcDataKey(extraContext));
    var originalClaims = context.Principal.Claims.ToList();
    var impHandler = new ImpersonationHandler(context.HttpContext,
        _protectionProvider, originalClaims);

    var newClaims = new List<Claim>();
    if (originalClaims.All(x =>
            x.Type != PermissionConstants.PackedPermissionClaimType)
        || impHandler.ImpersonationChange
        || _authChanges.IsOutOfDateOrMissing(AuthChangesConsts.FeatureCacheKey,
            originalClaims.SingleOrDefault(x =>
                x.Type == PermissionConstants.LastPermissionsUpdatedClaimType)?.Value,
            extraContext))
    {
        var userId = impHandler.GetUserIdForWorkingOutPermissions();
        newClaims.AddRange(await BuildFeatureClaimsAsync(userId, rtoPLazy.Value));
    }
    if (originalClaims.All(x =>
            x.Type != DataAuthConstants.HierarchicalKeyClaimName)
        || impHandler.ImpersonationChange)
    {
        var userId = impHandler.GetUserIdForWorkingDataKey();
        newClaims.AddRange(BuildDataClaims(userId, dataKeyLazy.Value));
    }

    if (newClaims.Any())
    {
        newClaims.AddRange(RemoveUpdatedClaimsFromOriginalClaims(
            originalClaims, newClaims));
        impHandler.AddOrRemoveImpersonationClaim(newClaims);
        var identity = new ClaimsIdentity(newClaims, "Cookie");
        var newPrincipal = new ClaimsPrincipal(identity);
        context.ReplacePrincipal(newPrincipal);
        context.ShouldRenew = true;
    }
    extraContext.Dispose();
}
The changes to add impersonation to the ValidateAsync code are: * Lines 10 to 11: I create a new ImpersonationHandler to use throughout the method. * Lines 16 and 27: the “impHandler.ImpersonationChange” property will be true if impersonation is starting or stopping, which are the times when the user’s claims need to be recalculated. * Lines 22 and 29: the UserId used to calculate the Permissions and DataKey can alter if in impersonation mode. These impHandler methods control which UserId value is used (the support user’s UserId or the impersonated user’s UserId). * Line 37: working out the impersonation state relies on having an “Impersonalising” claim while impersonation is active. This method makes sure the “Impersonalising” claim is added on starting impersonation, and removes it when stopping impersonation.
The other thing to note is that the code above contains the “refresh claims” feature described in the Part 5 article. This means that while impersonating a user the claims will be recalculated. The ImpersonationHandler is designed to handle this, and it will continue to impersonate a user during a “refresh claims” event.
_NOTE: EVEN IF YOU DON’T NEED THE “REFRESH CLAIMS” FEATURE YOU MIGHT LIKE TO TAKE ADVANTAGE OF THE “__HOW TO TELL YOUR FRONT-END THAT THE PERMISSIONS HAVE CHANGED__”
I DESCRIBE IN PART 5. THIS ALLOWS A FRONT-END FRAMEWORK, LIKE REACT.JS, ANGULAR.JS, VUE.JS ETC., TO DETECT THAT THE PERMISSIONS HAVE CHANGED SO THAT IT CAN SHOW THE APPROPRIATE DISPLAYS/LINKS._ 4. THE IMPERSONATION SERVICES The ImpersonationService class has two methods: StartImpersonation and StopImpersonation. They have some error checks, but the actual code is really simple because all they do is create or delete the impersonation cookie respectively. The code for both methods is shown below.

public string StartImpersonation(string userId, string userName,
    bool keepOwnPermissions)
{
    if (_cookie == null)
        return "Impersonation is turned off in this application.";
    if (!_httpContext.User.Identity.IsAuthenticated)
        return "You must be logged in to impersonate a user.";
    if (_httpContext.User.Claims.GetUserIdFromClaims() == userId)
        return "You cannot impersonate yourself.";
    if (_httpContext.User.InImpersonationMode())
        return "You are already in impersonation mode.";
    if (userId == null)
        return "You must provide a userId string";
    if (userName == null)
        return "You must provide a username string";

    _cookie.AddUpdateCookie(new ImpersonationData(
            userId, userName, keepOwnPermissions)
        .GetPackImpersonationData());
    return null;
}
public string StopImpersonation()
{
    if (!_httpContext.User.InImpersonationMode())
        return "You aren't in impersonation mode.";

    _cookie.Delete();
    return null;
}
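To show how these two service methods might be called, here is a hypothetical MVC controller sketch. The controller shape, action names and redirect targets are illustrative, not the repo's actual code; the real controller is also guarded by the impersonation Permissions described below.

```csharp
using Microsoft.AspNetCore.Mvc;

//illustrative controller – the real one is protected by impersonation Permissions
public class ImpersonationController : Controller
{
    private readonly ImpersonationService _service;

    public ImpersonationController(ImpersonationService service)
    {
        _service = service;
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult Start(string userId, string userName, bool keepOwnPermissions)
    {
        //the service returns null on success, otherwise an error message
        var error = _service.StartImpersonation(userId, userName, keepOwnPermissions);
        if (error != null)
            ModelState.AddModelError("", error);
        return RedirectToAction("Index", "Home");
    }

    [HttpPost]
    [ValidateAntiForgeryToken]
    public IActionResult Stop()
    {
        var error = _service.StopImpersonation();
        if (error != null)
            ModelState.AddModelError("", error);
        return RedirectToAction("Index", "Home");
    }
}
```

The null-on-success/string-on-error convention means the controller can surface the error messages directly to the support user.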
The only thing of note in this code is the keepOwnPermissions property in the StartImpersonation method. This controls whether the impersonated user’s Permissions or the current support user’s Permissions are used. I have also added two Permissions, Impersonate and ImpersonateKeepOwnPermissions, which are used in the ImpersonationController to control who can access the impersonation feature. 5. MAKING SURE THAT THE IMPERSONATION FEATURE IS ROBUST There are a few things that could cause problems or security risks, such as someone logging out while in impersonation mode and the next user logging in and inheriting the impersonation mode. For these reasons I set up a few things to fail safe. Firstly, I set the CookieOptions as shown below, which makes the cookie secure and gives it the lifetime of the client browser session.

_options = new CookieOptions
{
    Secure = false, //ONLY FOR DEMO!!
    HttpOnly = true,
    IsEssential = true,
    Expires = null,
    MaxAge = null
};
Here are some notes on these options: * Line 3: In real life you would want this to be true because you would be using HTTPS, but for this demo I allow HTTP. * Line 4: This says JavaScript can’t read it. * Line 5: This is an essential cookie, and setting this to true means it is allowed without user clearance. * Lines 6 and 7: These two settings make the cookie a session cookie, which means it is deleted when the client (e.g. browser) is shut down. The other thing I do is delete the impersonation cookie when the user logs out. I do this by capturing another event called OnSigningOut.

services.ConfigureApplicationCookie(options =>
{
    options.Events.OnValidatePrincipal = authCookieValidate.ValidateAsync;
    //This ensures the impersonation cookie is deleted when a user signs out
    options.Events.OnSigningOut = authCookieSigningOut.SigningOutAsync;
});
Finally, I update the _LoginPartial.cshtml to make it clear you are in impersonation mode and who you are impersonating. I replace the “Hello @User.Identity.Name” with “@User.GetCurrentUserNameAsHtml()”, which shows, say, “Impersonating joe@gmail.com” with the bootstrap class text-danger. Here is the code that does that.

public static HtmlString GetCurrentUserNameAsHtml(
    this ClaimsPrincipal claimsPrincipal)
{
    var impersonalisedName = claimsPrincipal.GetImpersonatedUserNameMode();
    var nameToShow = impersonalisedName ??
        claimsPrincipal.Claims.SingleOrDefault(
            x => x.Type == ClaimTypes.Name)?.Value ??
        "not logged in";

    return new HtmlString(
        "<span" +
        (impersonalisedName != null
            ? " class=\"text-danger\">Impersonating "
            : ">Hello ") +
        $"{nameToShow}</span>");
}
THE DOWNSIDES OF THE USER IMPERSONATION FEATURE There aren’t any big extra performance issues with the impersonation feature, but of course the use of the authorization cookie OnValidatePrincipal event does add a (very) small extra cost on every HTTP request. One downside is that it’s quite complex, with various classes, services and startup code. If you want a simpler impersonation approach I would recommend Max Vasilyev’s “User impersonation in Asp.Net Core” article that directly uses ASP.NET Core’s Identity features. Another downside is the security issue of a support user being able to see and change a user’s data. I strongly recommend adding logging around the impersonation feature and also marking all data with the person who created/edited it using the UserId, which I make sure is the real (support) user’s UserId. That way if there is a problem you can show who did what. CONCLUSION
Over the years I have found user impersonation really useful (I wrote about this for ASP.NET MVC back in 2015). This article now provides user impersonation for ASP.NET Core and also fits in with my “better authorization” approach. All the code, with a working example web application, is available in the GitHub repo called PermissionAccessControl2.
As well as allowing you to provide good help to your users, I have found the impersonation feature great for helping in the early stages of development. For instance, say you haven’t yet made the user management features safe enough for your SaaS users to use themselves; then you can have your support people manage that until you have implemented a version suitable for SaaS users.
User impersonation is also really useful for live testing, because you can quickly swap between different types of users to check that the system is working as you designed. (You might also like to look at my “Getting better data for unit testing your EF Core applications” article on how to extract and anonymize personal data from a live system too.)
You may not want to use my “better authorization” approach, but instead use ASP.NET Core’s Roles. In that case the approach I use, with the authorization cookie OnValidatePrincipal event, is still valid. It’s just that you need to alter the Roles claim in the user’s Claims. The ImpersonationHandler class will still be useful for you, as it simply detects the impersonation and provides the UserId of the impersonated user – from there you can look up the impersonated user’s Roles and replace them in the user’s claims. Happy coding!
_IF YOU HAVE AN ASP.NET CORE OR ENTITY FRAMEWORK CORE PROBLEM THAT YOU WANT HELP ON THEN PLEASE CONTACT ME VIA MY __CONTACT__ PAGE. I’M A FREELANCE CONTRACTOR SPECIALIZING IN ASP.NET CORE, EF CORE AND AZURE._
PART 5: A BETTER WAY TO HANDLE AUTHORIZATION – REFRESHING USER’S CLAIMS
Last Updated: August 22, 2019 | Created: July 29, 2019 This article focuses on the ability to update a logged-in user’s authorization as soon as any of the authorization classes in the database are changed – I refer to this as “refresh claims” (see “Setting the Scene” below for a longer explanation). This article was inspired by a number of people asking for this feature in my alternative feature/data authorization approach originally described in the first article in this series.
The original article is very popular, with lots of questions and comments. I therefore came back about six months after the first article to answer some of the more complex questions by creating a new PermissionAccessControl2 example web application and the following three extra articles: * A better way to handle authorization – six months on.
* Building robust and secure data authorization with EF Core.
* Adding user impersonation to an ASP.NET Core web application.
UPDATE: SEE MY NDC OSLO 2019 TALK WHICH COVERS THESE THREE ARTICLES.
You can find the original articles at: * A better way to handle authorization in ASP.NET Core – original article. * Handling data authorization in ASP.NET Core and Entity Framework Core.
_NOTE: YOU CAN SEE THE “REFRESH CLAIMS” FEATURE IN ACTION BY CLONING THE __PERMISSIONACCESSCONTROL2__ EXAMPLE WEB APPLICATION AND THEN RUNNING THE PERMISSIONACCESSCONTROL2 PROJECT. BY DEFAULT, IT IS SET UP TO USE IN-MEMORY DATABASES SEEDED WITH DEMO DATA AND THE “REFRESH CLAIMS” FEATURE. SEE THE “REFRESH CLAIMS” MENU ITEM._
TL;DR; – SUMMARY
* Typically, when you log in to an ASP.NET Core web app the things you can do, known as _authorization_, are “frozen”, i.e. fixed for however long you stay logged in. * An alternative is to update a logged-in user’s authorization whenever the internal, database versions of the authorization are updated. I call this “refreshing claims” because authorization data is stored in a user’s Claims. * This article describes how I have added this “refresh claims” feature to my alternative feature/data authorization approach described in this series. * While the “refresh claims” code I show applies to my alternative feature/data authorization code, the same approach can be applied to any standard ASP.NET Core Role or Claim authorization system. * There are some (small?) downsides to adding this feature around complexity and performance, which I cover near the end of this article. * The code in this article can be found in the open-source PermissionAccessControl2 GitHub repo, which also includes a runnable example web application. SETTING THE SCENE – WHY REFRESH THE USER’S CLAIMS? If you are using the built-in ASP.NET Core Identity system, then when you log in you get a set of Roles and Claims which define what you can do – known as _authorisation_ in ASP.NET. By default, once you are logged in your authorisation is fixed for however long you stay logged in. This means that if anything changes the internal authorisation settings, you need to log out and log back in again before you inherit these new settings. Most systems work this way because it’s simple and covers most of the authentication/authorization requirements of standard web apps. But there are situations where you need any change to authorization to be immediately applied to the user – what I call “refreshing claims”. For instance, in a high-security system like a bank you might want to be able to revoke certain authentication features immediately from a logged-in user/users. Another use case would be where users can trial a paid-for feature, but once the trial period ends you want the feature to turn off immediately. So, if you need the refresh feature, how can you implement it? One approach would be to recalculate the user’s authorisation settings every time they access the system – that would work but would add a performance hit due to all the extra database accesses and recalculations required on every HTTP request. Another approach would be to revoke/time-out the authentication token or cookie and have the system recalculate the authentication token or cookie again. In the next section I describe how I added the “refresh claims” feature to my feature authentication approach.
THE ARCHITECTURE OF MY “REFRESH CLAIMS” FEATURE In the earlier articles I described a replacement authorization system which has the advantage over ASP.NET Core’s Roles-based authorisation that the Admin user can change all aspects of a user’s authorisation (with ASP.NET Core’s Roles-based authorisation you need to edit/redeploy the code to alter what controller methods a Role can access). The figure below shows an abbreviated version of how the feature part of the authorisation process runs on login (see the Part 3 article for a more in-depth explanation). But to implement the “refresh claims” feature I need a way to alter the permissions while the user is logged in. My solution is to use an authorization cookie event that happens on every HTTP request. This allows me to change the user’s authorisations at any time. To make this work I set a “LastUpdated” time when any of the database classes that manage authorization are changed. This is then compared with the “LastUpdated” claim in the user’s Claims – see the diagram below which shows this process. Parts in BLUE BOLD TEXT show what changes over time. I’m going to describe the stages involved in the following order:
* How to detect that the Roles/Permissions have changed. * How to store the last time the Roles/Permissions changed. * Linking the authorization code to the database cache. * How to detect/update the user’s permissions Claim. * How to tell your front-end that the Permissions have changed. 1. HOW TO DETECT THAT THE ROLES/PERMISSIONS HAVE CHANGED A user’s Permissions could be out of date whenever the User Roles, the Role’s Permissions, or the User Modules are changed. To detect this I add some detection code to the SaveChanges/SaveChangesAsync methods in the DbContext that manages those database classes, called ExtraAuthorizeDbContext. _NOTE: PUTTING THE DETECTION CODE INSIDE THE SAVECHANGES AND SAVECHANGESASYNC METHODS PROVIDES A ROBUST SOLUTION BECAUSE IT DOESN’T RELY ON THE DEVELOPER ADDING CODE TO ALL THE SERVICES THAT CHANGE THE AUTHORIZATION DATABASE CLASSES._ Here is the code in the SaveChanges method.

// I only have to override these two versions of SaveChanges,
// as the other two SaveChanges versions call these
public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    if (ChangeTracker.Entries().Any(x =>
        (x.Entity is IChangeEffectsUser &&
            x.State == EntityState.Modified) ||
        (x.Entity is IAddRemoveEffectsUser &&
            (x.State == EntityState.Added || x.State == EntityState.Deleted))))
    {
        _authChange.AddOrUpdate(this);
    }
    return base.SaveChanges(acceptAllChangesOnSuccess);
}
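The two interfaces used in the condition above act as markers that say “changing this class affects an existing user’s authorization”. A minimal sketch of how they might be declared and applied is shown below (the UserToRole class body here is illustrative; in the repo the interfaces are applied to the classes holding the User Roles, the Role’s Permissions, and the User Modules):

```csharp
//marker interface: modifying an instance of this class affects existing users
public interface IChangeEffectsUser { }

//marker interface: adding or removing an instance affects existing users
public interface IAddRemoveEffectsUser { }

//illustrative example of a class that carries a marker interface –
//adding or removing a user-to-role link changes that user's Permissions
public class UserToRole : IAddRemoveEffectsUser
{
    public string UserId { get; set; }
    public string RoleName { get; set; }
}
```

Because the SaveChanges override only looks at the marker interfaces, new authorization classes are covered simply by adding the right interface to them.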
The things to note are: * Lines 6 to 9: This is looking for changes that could affect an existing user. * Line 11: If a change is found it calls the AddOrUpdate method in the IAuthChanges instance that is injected into the ExtraAuthorizeDbContext. I describe the AuthChanges class in section 3. 2. HOW TO STORE THE LAST TIME THE ROLES/PERMISSIONS CHANGED Once SaveChanges has detected a change we need to store the time that change happened. This is done via a class called TimeStore, which is shown below.

public class TimeStore
{
    public string Key { get; set; }
    public long LastUpdatedTicks { get; set; }
}
This is a Key/Value cache, where the Value is a long (Int64) containing the time, as ticks, when the item was changed. I did it this way because I can use this same store to hold changes in any of my hierarchical DataKeys (see the Part 4 article), which I don’t cover in this article. _NOTE: IN THE PART 4 ARTICLE I DESCRIBE A MULTI-TENANT SYSTEM WHICH IS HIERARCHICAL. IN THAT CASE IF I MOVE A SUBGROUP (E.G. WEST COAST DIVISION) TO A DIFFERENT PARENT, THEN THE DATAKEY WOULD CHANGE, ALONG WITH ALL ITS “CHILD” DATA. IN THIS CASE YOU MUST REFRESH ANY LOGGED-IN USER’S DATAKEY, OTHERWISE A LOGGED-IN USER WOULD HAVE ACCESS TO THE WRONG DATA. THAT IS WHY I USED A GENERALIZED TIMESTORE, SO THAT I COULD ADD A PER-COMPANY “LASTUPDATED” VALUE._ I also add the ITimeStore interface to the ExtraAuthorizeDbContext, which the AuthChanges class (see next section) can use. The ITimeStore defines two methods: * GetValueFromStore, which reads a value from the TimeStore * AddUpdateValue, which adds or updates a TimeStore entry You will see these being used in the next section. 3. LINKING THE AUTHORIZATION CODE TO THE DATABASE CACHE I created a small project called CommonCache which lives at the bottom of the solution structure, i.e. it doesn’t reference any other project. It contains the AuthChanges class, which links the database to the code handling the authorization. The AuthChanges class provides a method that the authorization code can call to check if the user’s authorization Claims are out of date. And at the database end it creates the correct cache key/value when the database detects a change in the authorization database classes.
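For reference, a minimal version of the ITimeStore interface described in step 2, and of the ExtraAuthorizeDbContext implementing it over a DbSet of TimeStore entries, might look like this (a sketch – the exact signatures and member names in the repo may differ):

```csharp
public interface ITimeStore
{
    //Reads the ticks value stored under the given key, or null if missing
    long? GetValueFromStore(string key);

    //Adds a new entry, or updates the existing one, under the given key
    void AddUpdateValue(string key, long ticksValue);
}

//abbreviated – the real class also derives from DbContext and
//holds the authorization DbSets (User Roles, Role's Permissions, etc.)
public class ExtraAuthorizeDbContextSketch : ITimeStore
{
    public DbSet<TimeStore> TimeStores { get; set; }

    public long? GetValueFromStore(string key)
        => TimeStores.Find(key)?.LastUpdatedTicks;

    public void AddUpdateValue(string key, long ticksValue)
    {
        var existing = TimeStores.Find(key);
        if (existing == null)
            TimeStores.Add(new TimeStore
                { Key = key, LastUpdatedTicks = ticksValue });
        else
            existing.LastUpdatedTicks = ticksValue;
    }
}
```

Having the DbContext itself implement ITimeStore means the change-time write happens in the same SaveChanges transaction as the authorization change that caused it.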
Here is the AuthChanges class code.

public class AuthChanges : IAuthChanges
{
    public bool IsOutOfDateOrMissing(string cacheKey,
        string ticksToCompareString, ITimeStore timeStore)
    {
        if (ticksToCompareString == null)
            //if there is no time claim then you do need to reset the claims
            return true;

        var ticksToCompare = long.Parse(ticksToCompareString);
        return IsOutOfDate(cacheKey, ticksToCompare, timeStore);
    }

    private bool IsOutOfDate(string cacheKey, long ticksToCompare,
        ITimeStore timeStore)
    {
        var cachedTicks = timeStore.GetValueFromStore(cacheKey);
        if (cachedTicks == null)
            throw new ApplicationException(
                $"You must seed the database with a cache value for the key {cacheKey}.");

        return ticksToCompare < cachedTicks;
    }

    public void AddOrUpdate(ITimeStore timeStore)
    {
        timeStore.AddUpdateValue(AuthChangesConsts.FeatureCacheKey,
            DateTime.UtcNow.Ticks);
    }
}
The things to note are: * Lines 3 to 12: The IsOutOfDateOrMissing method is called by the ValidateAsync method (described in the next section) to find out if the user’s claims need recalculating, i.e. it returns true if the user’s claims “LastUpdated” value is missing, or it is earlier than the database “LastUpdated” time. You can see the cache read in line 17, inside the private method that does the time compare. * Lines 25 to 29: The AddOrUpdate method makes sure the ITimeStore has an entry under the key defined by FeatureCacheKey which holds the current time in ticks. This is referred to as the database “LastUpdated” value. 4. HOW TO DETECT/UPDATE THE USER’S PERMISSIONS CLAIM In the Part 1 article I showed how you can add claims to the user at login time via the authentication cookie’s OnValidatePrincipal event, but these claims are “frozen”. However, this event is perfect for our “refresh claims” feature because the event happens on every HTTP request. So, in the new version 2 PermissionAccessControl2 code I have
altered the code to add the “refresh claims” feature. Below is the new version of the ValidateAsync method, with comments on the key parts of the code at the bottom.

public async Task ValidateAsync(CookieValidatePrincipalContext context)
{
    var extraContext = new ExtraAuthorizeDbContext(
        _extraAuthContextOptions, _authChanges);
    //now we set up the lazy values - I used Lazy for performance reasons
    var rtoPLazy = new Lazy<CalcAllowedPermissions>(() =>
        new CalcAllowedPermissions(extraContext));

    var originalClaims = context.Principal.Claims.ToList();
    var newClaims = new List<Claim>();

    //see if the claims are missing or out of date
    if (originalClaims.All(x =>
            x.Type != PermissionConstants.PackedPermissionClaimType) ||
        _authChanges.IsOutOfDateOrMissing(AuthChangesConsts.FeatureCacheKey,
            originalClaims.SingleOrDefault(x =>
                x.Type == PermissionConstants.LastPermissionsUpdatedClaimType)?.Value,
            extraContext))
    {
        var userId = originalClaims.SingleOrDefault(x =>
            x.Type == ClaimTypes.NameIdentifier)?.Value;
        newClaims.AddRange(
            await BuildFeatureClaimsAsync(userId, rtoPLazy.Value));
    }

    //… removed DataKey code as not relevant to this article

    if (newClaims.Any())
    {
        newClaims.AddRange(RemoveUpdatedClaimsFromOriginalClaims(
            originalClaims, newClaims));
        var identity = new ClaimsIdentity(newClaims, "Cookie");
        var newPrincipal = new ClaimsPrincipal(identity);
        context.ReplacePrincipal(newPrincipal);
        context.ShouldRenew = true;
    }
    extraContext.Dispose(); //be tidy and dispose the context.
}

private IEnumerable<Claim> RemoveUpdatedClaimsFromOriginalClaims(
    List<Claim> originalClaims, List<Claim> newClaims)
{
    var newClaimTypes = newClaims.Select(x => x.Type);
    return originalClaims.Where(x => !newClaimTypes.Contains(x.Type));
}

private async Task<List<Claim>> BuildFeatureClaimsAsync(
    string userId, CalcAllowedPermissions rtoP)
{
    var claims = new List<Claim>
    {
        new Claim(PermissionConstants.PackedPermissionClaimType,
            await rtoP.CalcPermissionsForUserAsync(userId)),
        new Claim(PermissionConstants.LastPermissionsUpdatedClaimType,
            DateTime.UtcNow.Ticks.ToString())
    };
    return claims;
}
The things to note are: * Lines 13 to 18: This checks if the PackedPermissionClaimType claim is missing, or the LastPermissionsUpdatedClaimType claim’s value is either out of date or missing. If either of these is true then it has to recalculate the user’s Permissions, which you can see in lines 19 to 23. * Lines 46 to 57: This adds the two claims needed: the PackedPermissionClaimType claim with the user’s recalculated Permissions, and the LastPermissionsUpdatedClaimType claim which is given the current time. 5. HOW TO TELL YOUR FRONT-END THAT THE PERMISSIONS HAVE CHANGED If you are using some form of front-end framework, like React.js, Angular.js, Vue.js etc., then you will use the Permissions in the front-end to select which buttons, links etc. to show. In the Part 3 article I showed a very simple API to get the Permission names, but now we need to know when to update the local Permissions in your front end. My solution is to add a header to every HTTP response that gives the “LastUpdated” time when the current user’s Permissions were updated. By saving this value in the JavaScript sessionStorage you can compare the time provided in the header with the last value you had – if they are different then you need to re-read the permissions for the current user.
It’s pretty easy to add a header, and here is the code inside the Configure method of the Startup class in your ASP.NET Core project (with thanks to SO answer https://stackoverflow.com/a/48610119/1434764). The configuration key and the header name shown below are illustrative.

//This should come AFTER the app.UseAuthentication() call
if (Configuration["UpdateCookieOnChange"] == "True") //illustrative key name
{
    app.Use((context, next) =>
    {
        var lastTimeUserPermissionsSet = context.User.Claims
            .SingleOrDefault(x =>
                x.Type == PermissionConstants.LastPermissionsUpdatedClaimType)
            ?.Value;
        if (lastTimeUserPermissionsSet != null)
            //illustrative header name
            context.Response.Headers["Last-Permissions-Updated"]
                = lastTimeUserPermissionsSet;
        return next.Invoke();
    });
}
THE DOWNSIDES OF ADDING THE “REFRESH CLAIMS” FEATURE While the “refresh claims” feature is useful it does have some downsides. Firstly, it is a lot more complex than using the UserClaimsPrincipalFactory approach explained in the Part 3 article. Complexity makes the application harder to understand and can be harder to refactor. Also, I only got the “refresh claims” feature to work for cookie authentication, while the “frozen” implementation I showed in the Part 3 article works with both cookie and token authentication. If you need a token solution then a good starting point is the https://www.blinkingcaret.com/ blog (you might find the article “Refresh Tokens in ASP.NET Core Web Api” useful). The other issue is performance. For every HTTP request a read of the TimeStore is required. Now, that request is very small and only takes about 750ns on my i7 4GHz Windows development PC, but with lots of simultaneous users you would be loading up the database. The good news is that using a database means it automatically works with multiple instances of a web application (known as scale-out). _NOTE: I DID TRY ADDING AN ASP.NET CORE DISTRIBUTED MEMORY CACHE TO IMPROVE LOCAL PERFORMANCE, BUT BECAUSE THE ONVALIDATEPRINCIPAL EVENT LIVES OUTSIDE THE DEPENDENCY INJECTION YOU END UP WITH DIFFERENT INSTANCES OF THE MEMORY CACHE (TOOK ME A WHILE TO WORK THAT OUT!). YOU COULD ADD A CACHE LIKE REDIS, BECAUSE IT RELIES ON CONFIGURATION RATHER THAN THE SAME INSTANCE, BUT IT DOES ADD ANOTHER LEVEL OF COMPLEXITY._ The other performance issue is that it has to refresh EVERY logged-in user, as it doesn’t have enough information to target the specific users that need an update. If you have thousands of concurrent users that will bring a higher-than-normal load on the application and the database. Overall, recalculating the Permissions isn’t that onerous, but it may be worth changing any roles and permissions outside the site’s peak usage times. Overall, I would suggest you think hard as to whether you need the “refresh claims” feature. Most authentication systems don’t have a “refresh claims” feature as standard, so remember the Yagni (“You Aren’t Gonna Need It”) rule.
CONCLUSION
This article has focused on one specific feature that readers of my first article felt was needed. I believe my solution to the “refresh claims” feature is robust, but there are some (small?) downsides which I have listed. You can find all the code in this article, and a runnable example application, in the GitHub repo PermissionAccessControl2.
When I first developed the whole feature/data authorization approach for one of my clients we discussed whether we needed the “refresh claims” feature. They decided it wasn’t worth the effort, and I think that was the right decision for their application. But if your application/users need the refresh claims feature then you now have a fully worked out approach which will still work even on web apps that scale out, i.e. run multiple instances of the web app to give better scalability. Happy coding!
PS. Have a look at Andrew Lock’s excellent series “Adding feature flags to an ASP.NET Core app”
for another useful feature to add to your web app. IF YOU HAVE AN ASP.NET CORE OR ENTITY FRAMEWORK CORE PROBLEM THAT YOU WANT HELP ON THEN I AM AVAILABLE AS A FREELANCE CONTRACTOR. PLEASE SEND ME A CONTACT REQUEST VIA MY CONTACT PAGE AND WE CAN TALK SOME MORE ON SKYPE.
PART 4: BUILDING A ROBUST AND SECURE DATA AUTHORIZATION WITH EF CORE Last Updated: September 28, 2019 | Created: July 9, 2019 This article covers how to implement data authorization using Entity Framework Core (EF Core), that is, only returning data from a database that the current user is allowed to access. The article focuses on how to implement data authorization in such a way that the filtering is very secure, i.e. the filter always works, and the architecture is robust, i.e. the design of the filtering doesn’t allow a developer to accidentally bypass that filtering. This article is part of a series and follows a more general article on data authorization I wrote six months ago. That first article introduced the idea of data authorization, but this article goes deeper and looks at one way to design a data authorization system that is secure and robust. It uses a new, improved example application, called PermissionAccessControl2 (referred to as “version 2”) and a series of new articles which cover other areas of improvement/change. * Part 3: A better way to handle authorization – six months on.
* Part 4: Building robust and secure data authorization with EF Core(THIS ARTICLE).
* Part 5: A better way to handle authorization – refreshinguser’s claims.
* Part 6: Adding user impersonation to an ASP.NET Core webapplication
* Part 7: Implementing the “better ASP.NET Core authorization”code in your app
UPDATE: SEE MY NDC OSLO 2019 TALK WHICH COVERS THESETHREE ARTICLES.
Original articles:
* A better way to handle authorization in ASP.NET Core – original article. * Handling data authorization in ASP.NET Core and Entity FrameworkCore.
TL;DR; – SUMMARY

* This article provides a very detailed description of how I built a hierarchical, multi-tenant application where the data a user can access depends on which company they belong to and what role they have in that company.
* The example application is built using ASP.NET Core and EF Core. You can look at the code and run the application, which has demo data/users, by cloning this GitHub repo.
* This article and its new (version 2) example application are a lot more complicated than the data authorization described in the original (Part 2) data authorization article. If you want to start with something simpler, then read the original (Part 2) article first.
* The key feature that makes it all work is EF Core’s Query Filters, which provide a way to filter data in ALL EF Core database queries.
* I break the article into two parts:
* Making it secure, which covers how I implemented a hierarchical, multi-tenant application that filters data based on the user’s given data access rules.
* Making it robust, which is about designing an architecture that guides developers so that they can’t accidentally bypass the security code that has been written.

SETTING THE SCENE – EXAMPLES OF DATA AUTHORIZATION

Pretty much every web application with a database will filter data – Amazon doesn’t show you every product it has, but tries to show you things you might be interested in. But that type of filtering is for the convenience of the user and is normally part of the application code. Another type of database filtering is driven by security concerns – I refer to this as _data authorization_. This isn’t about filtering data for user convenience, but about applying strict business rules that dictate what data a user can see. Typical scenarios where data authorization is needed are:

* Personal data, where only the user can read/alter their personal data.
* Multi-tenant systems, where one database is used to support multiple, separate users’ data.
* Hierarchical systems, for instance a company with divisions where the CEO can see all the sales data, but each division can only see their own sales data.
* Collaboration systems like GitHub/BitBucket where you can invite people to work with you on a project etc.

_NOTE: IF YOU ARE NEW TO THIS AREA THEN PLEASE SEE THIS SECTION OF THE ORIGINAL ARTICLE WHERE I HAVE A LONGER INTRODUCTION TO DATA PROTECTION, BOTH WHAT IT’S ABOUT AND WHAT THE DIFFERENT PARTS ARE._

My example is both a multi-tenant and a hierarchical system, which is what one of my clients needed. The diagram below shows two companies, 4U Inc. and Pets2 Ltd., with Joe, our user, in charge of the LA division of 4U’s outlets. The rest of this article will deal with how to build an application which gives Joe access to the sales and stock situation in both LA outlets, but no access to any of the other divisions’ data or other tenants like Pets2 Ltd.

In the next sections I look at the two aspects: a) making my data authorization design secure, i.e. the filter always works, and b) making its architecture robust, i.e. the design of the filtering doesn’t allow a developer to accidentally bypass that filtering.

A. BUILDING A SECURE DATA AUTHORIZATION SYSTEM

I start with the most important part of the data authorization – making sure my approach is secure, that is, it will correctly filter out the data the user isn’t allowed to see. The cornerstone of this design is EF Core’s Query Filters, but to
use them we need to set up a number of other things too. Here is a list of the key parts, which I then describe in detail:

* ADDING A DATAKEY TO EACH FILTERED CLASS: Every class/table that needs data authorization must have a property that holds a security key, which I call the DataKey.
* SETTING THE DATAKEY PROPERTY/COLUMN: The DataKey needs to be set to the right value whenever a new filtered class is added to the database.
* ADD THE USER’S DATAKEY TO THEIR CLAIMS: The user’s claims need to contain a security key that is matched in some way to the DataKey in the database.
* FILTER VIA THE DATAKEY IN EF CORE: The EF Core DbContext has the user’s DataKey injected into it and it uses that key in EF Core Query Filters to only return the data that the user is allowed to see.

A1. ADDING A DATAKEY TO EACH FILTERED CLASS

EVERY CLASS/TABLE THAT NEEDS DATA AUTHORIZATION MUST HAVE A PROPERTY THAT HOLDS A SECURITY KEY, WHICH I CALL THE DATAKEY.

In my example application there are multiple different classes that need to be filtered. I have a base IDataKey interface which defines the DataKey string property. All the classes to be filtered inherit this interface. As you will see, the DataKey is used a lot in this application.

A2. SETTING THE DATAKEY PROPERTY/COLUMN

THE DATAKEY NEEDS TO BE SET TO THE RIGHT VALUE WHENEVER A NEW FILTERED CLASS IS ADDED TO THE DATABASE.

_NOTE: THIS EXAMPLE IS COMPLEX BECAUSE OF THE HIERARCHICAL DATA DESIGN. IF YOU WANT A SIMPLE EXAMPLE OF SETTING A DATAKEY SEE THE “PERSONAL DATA FILTERING” EXAMPLE IN THE ORIGINAL, PART 2 ARTICLE._

Because of the hierarchical nature of my example the “right value” is a bit complex. I have chosen to create a DataKey that is a combination of the primary keys of all the layers in the hierarchy. As you will see in the next subsection, this allows the query filter to target different levels in the multi-tenant hierarchy. The diagram below shows you the DataKeys (bold, containing numbers and |) for all the levels in the 4U Company hierarchy.

The hierarchy consists of three classes, Company, SubGroup and RetailOutlet, which all inherit from an abstract class called TenantBase. This allows me to create relationships between any of the class types that inherit from TenantBase (EF Core treats these classes as Table-Per-Hierarchy, TPH, and stores all the different types in one table). But setting the DataKey on creation is difficult because the DataKey needs the primary key, which isn’t set until the entity is created in the database. My way around this is to use a transaction. Here is the method in the TenantBase class that is called by the Company, SubGroup or RetailOutlet static creation factory.
protected static void AddTenantToDatabaseWithSaveChanges(
    TenantBase newTenant, CompanyDbContext context)
{
    //… various business rule checks left out

    using (var transaction = context.Database.BeginTransaction())
    {
        //set up the backward link (if Parent isn't null)
        newTenant.Parent?._children.Add(newTenant);
        //also need to add it in case it's the company
        context.Add(newTenant);
        //Call SaveChanges to get the primary key set
        context.SaveChanges();
        //Now we can set the DataKey
        newTenant.SetDataKeyFromHierarchy();
        context.SaveChanges();
        transaction.Commit();
    }
}
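The IDataKey interface that section A1 describes only needs to expose the DataKey string. The article doesn't reproduce the exact definition, so the sketch below is an assumption based on how the DataKey is used in this article:

```csharp
//Minimal sketch of the IDataKey interface from section A1.
//Every class/table that needs data authorization implements this.
public interface IDataKey
{
    //The security key that the query filters match against,
    //e.g. "1|2|5|" for a tenant, or ending in "*" for a retail outlet
    string DataKey { get; }
}
```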
The Stock and Sales classes are easier to handle, as they use the user’s DataKey. I override EF Core’s SaveChanges/SaveChangesAsync to do this, using the code shown below.

public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    foreach (var entityEntry in ChangeTracker.Entries()
        .Where(e => e.State == EntityState.Added))
    {
        if (entityEntry.Entity is IShopLevelDataKey hasDataKey)
            hasDataKey.SetShopLevelDataKey(accessKey);
    }
    return base.SaveChanges(acceptAllChangesOnSuccess);
}
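The SaveChanges override above depends on the shop-level classes implementing a small interface. Its exact definition isn't shown in this article, so the sketch below is an assumption based only on the member names used above:

```csharp
//Hedged sketch: only the names used by the SaveChanges override are assumed.
public interface IShopLevelDataKey
{
    string DataKey { get; }

    //Called when a new Stock/Sales entity is added, so it gets
    //stamped with the current user's shop-level DataKey
    void SetShopLevelDataKey(string key);
}
```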
A3. ADD THE USER’S DATAKEY TO THEIR CLAIMS

THE USER’S CLAIMS NEED TO CONTAIN A SECURITY KEY THAT IS MATCHED IN SOME WAY TO THE DATAKEY IN THE DATABASE.

My general approach, as detailed in the original data authorization article, is to have a claim in the user’s identity that is used to filter data. For personal data this can be the user’s Id (typically a string containing a GUID), or for a straightforward multi-tenant system it would be some form of tenant key stored in the user’s information. For this you might have the following class in your extra authorization data:

public class UserDataAccessKey
{
    public UserDataAccessKey(string userId, string accessKey)
    {
        UserId = userId ?? throw new ArgumentNullException(nameof(userId));
        AccessKey = accessKey;
    }

    public string UserId { get; private set; }
    public string AccessKey { get; private set; }
}
In this hierarchical and multi-tenant example it gets a bit more complex, mainly because the hierarchy could change, e.g. a company might start with a simple hierarchy of company to shops, but as it grows it might move a shop under a sub-division like the west coast. This means the DataKey can change dynamically. For that reason, I link to the actual Tenant class that holds the DataKey that the user should use. This means that we can look up the current DataKey of the Tenant when the user logs in.

_NOTE: IN PART 5 I TALK ABOUT HOW TO DYNAMICALLY UPDATE THE USER’S DATAKEY CLAIM OF ANY LOGGED-IN USER IF THE HIERARCHY THEY ARE IN CHANGES._

public class UserDataHierarchical
{
    public UserDataHierarchical(string userId, TenantBase linkedTenant)
    {
        if (linkedTenant.TenantItemId == 0)
            throw new ApplicationException(
                "The linkedTenant must be already in the database.");
        UserId = userId ?? throw new ArgumentNullException(nameof(userId));
        LinkedTenant = linkedTenant;
    }

    public int LinkedTenantId { get; private set; }
    public TenantBase LinkedTenant { get; private set; }
    public string UserId { get; private set; }
    public string AccessKey { get; private set; }
}
This dynamic DataKey example might be an extreme case, but seeing how I handled it might help you when you come across something that is more complex than a simple value.

At login you can add the feature and data claims to the user’s claims using the code I showed in Part 3, where I add to the user’s claims via a UserClaimsPrincipalFactory, as described in this section of the Part 3 article. The diagram below shows how my factory method looks up the UserDataHierarchical entry using the User’s Id and then adds the current DataKey of the linked Tenant.

A4. FILTER VIA THE DATAKEY IN EF CORE

THE EF CORE DBCONTEXT HAS THE USER’S DATAKEY INJECTED INTO IT AND IT USES THAT KEY IN EF CORE QUERY FILTERS TO ONLY RETURN THE DATA THAT THE USER IS ALLOWED TO SEE.

EF Core’s Query Filters are a new
feature added in EF Core 2.0 and they are fantastic for this job. You define a query filter in the OnModelCreating configuration method inside your DbContext and it will filter ALL queries – that comprises LINQ queries, the Find method and included navigation properties – and it even adds extra filter SQL to EF Core’s FromSql method (FromSqlRaw or FromSqlInterpolated in EF Core 3+). This makes Query Filters a very secure way to filter data.

For the version 2 example here is a look at the CompanyDbContext class, with the query filters set up by the OnModelCreating method towards the end of the code.

public class CompanyDbContext : DbContext
{
    internal readonly string DataKey;

    //… DbSet<T> properties left out

    public CompanyDbContext(DbContextOptions<CompanyDbContext> options,
        IGetClaimsProvider claimsProvider)
        : base(options)
    {
        DataKey = claimsProvider.DataKey;
    }

    //… override of SaveChanges/SaveChangesAsync left out

    protected override void OnModelCreating(ModelBuilder modelBuilder)
    {
        //… other configurations left out
        AddHierarchicalQueryFilter(modelBuilder.Entity<TenantBase>());
        //… the same filter is added for the Stock and Sales entity classes
    }

    private void AddHierarchicalQueryFilter<T>(EntityTypeBuilder<T> builder)
        where T : class, IDataKey
    {
        builder.HasQueryFilter(x => x.DataKey.StartsWith(DataKey));
        builder.HasIndex(x => x.DataKey);
    }
}
As you can see, the Query Filter uses the StartsWith method to compare the user’s DataKey with the DataKeys of the tenants and their Sales/Stock data. This means if Joe has the DataKey of 1|2|5| then he can see the Stock/Sales data for the shops “LA Dress4U” and “LA Shirt4U” – see diagram below.

_NOTE: YOU CAN TRY THIS BY CLONING THE PERMISSIONACCESSCONTROL2 REPO AND RUNNING IT LOCALLY (BY DEFAULT IT USES AN IN-MEMORY DATABASE TO MAKE IT EASY TO RUN). PICK DIFFERENT USERS TO SEE THE DIFFERENT DATA/FEATURES YOU CAN ACCESS._
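To show the effect of the query filter in code, here is a hedged usage sketch (it assumes a Stock entity class and a CompanyDbContext built for Joe, whose DataKey claim is 1|2|5|):

```csharp
//All reads go through the query filter automatically
var joesStock = context.Set<Stock>().ToList();
//EF Core appended WHERE DataKey LIKE '1|2|5|%' to the SQL,
//so only the two LA shops' stock rows are returned

//Code that genuinely needs unfiltered data must opt out explicitly,
//which makes any bypass easy to spot in a code review
var allStock = context.Set<Stock>().IgnoreQueryFilters().ToList();
```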
B. BUILDING A ROBUST DATA AUTHORIZATION ARCHITECTURE

We could stop here because we have covered all the code needed to secure the data. But I consider data authorization a high-risk part of my system, so I want to make it “secure by design”, i.e. it shouldn’t be possible for a developer to accidentally write something that bypasses the filtering. Here are the things I have done to make my code robust and guide another developer on how to do things.

* USE DOMAIN-DRIVEN DESIGN DATABASE CLASSES: Some of the code, especially creating the Company, SubGroup and RetailOutlet DataKeys, is complicated. I use DDD-styled classes which provide only one way to create or update the various DataKeys and relationships.
* FILTER-ONLY DBCONTEXT: I build a specific DbContext to contain all the classes/tables that need to be filtered.
* UNIT TEST TO CHECK YOU HAVEN’T MISSED ANYTHING: I build unit tests that ensure the classes in the filter-only DbContext have a query filter.
* CHECKS: With an evolving application some features are left for later. I often add small fail-safe checks in the original design to make sure any new features follow the original design approach.

B1. USE DOMAIN-DRIVEN DESIGN (DDD) DATABASE CLASSES

DDD teaches us that each entity (a class in the .NET world) should contain the code to create and update itself and its aggregates. Furthermore, it should stop any other external code from being able to bypass these methods, so that you are forced to use the methods inside the class. I really like this because it means there is one, and only one, method you can call to get certain jobs done. The effect is I can “lock down” how something is done and make sure everyone uses the correct methods.

Below is the TenantBase abstract class which all the tenant classes inherit from, showing the MoveToNewParent method that moves a tenant to another parent, for instance moving a RetailOutlet to a different SubGroup.

public abstract class TenantBase : IDataKey
{
    private HashSet<TenantBase> _children;

    // Relationships
    public int? ParentItemId { get; private set; }
    public TenantBase Parent { get; private set; }
    public IEnumerable<TenantBase> Children => _children?.ToList();

    // public methods
    public void MoveToNewParent(TenantBase newParent, DbContext context)
    {
        void SetKeyExistingHierarchy(TenantBase existingTenant)
        {
            existingTenant.SetDataKeyFromHierarchy();
            if (existingTenant.Children == null)
                context.Entry(existingTenant).Collection(x => x.Children).Load();
            if (!existingTenant._children.Any())
                return;
            foreach (var tenant in existingTenant._children)
            {
                SetKeyExistingHierarchy(tenant);
            }
        }

        //… various validation checks removed
        Parent._children?.Remove(this);
        Parent = newParent;
        //Now change the data key for all the hierarchy from this entry down
        SetKeyExistingHierarchy(this);
    }

    //… other methods/constructors left out
}
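A short usage sketch of MoveToNewParent (the variable names are assumptions; the point is that moving a tenant and re-keying everything below it is a single call):

```csharp
//Hedged sketch: move the "LA Dress4U" outlet under the West Coast sub-group.
//MoveToNewParent recalculates the DataKey of the outlet and all its children.
laDress4U.MoveToNewParent(westCoastSubGroup, context);
context.SaveChanges(); //persist the updated DataKeys
```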
The things to note are:

* The Children relationship is held in a private field which cannot be altered by outside code. It can only be read via the IEnumerable<TenantBase> Children property.

B2. FILTER-ONLY DBCONTEXT

My solution was to create a separate DbContext for the multi-tenant classes, with a different (but overlapping) DbContext for the extra authorization classes (see the diagram below as to what this looks like). The effect is to make a multi-tenant DbContext which only contains the filtered multi-tenant data. For a developer this makes it clear what classes you can access when using the multi-tenant DbContext.

_NOTE: HAVING MULTIPLE DBCONTEXTS WITH A SHARED TABLE CAN MAKE DATABASE MIGRATIONS A BIT MORE COMPLICATED. HAVE A LOOK AT MY ARTICLE “HANDLING ENTITY FRAMEWORK CORE DATABASE MIGRATIONS IN PRODUCTION” FOR DIFFERENT WAYS TO HANDLE MIGRATIONS._

B3. USING UNIT TESTS TO CHECK YOU HAVEN’T MISSED ANYTHING

With feature and data authorization I add unit tests that check I haven’t left a “hole” in my security. Because I have a DbContext specifically for the multi-tenant data I can write a test to check that every class mapped to the database has a Query Filter applied to it. Here is the code I use for that.

public void CheckQueryFiltersAreAppliedToEntityClassesOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<CompanyDbContext>();
    using (var context = new CompanyDbContext(options,
        new FakeGetClaimsProvider("accessKey*"))) //test stub for the claims provider
    {
        var entities = context.Model.GetEntityTypes().ToList();

        //ATTEMPT
        var queryFilterErrs = entities.CheckEntitiesHasAQueryFilter().ToList();

        //VERIFY
        queryFilterErrs.Any().ShouldBeFalse(string.Join('\n', queryFilterErrs));
    }
}
public static IEnumerable<string> CheckEntitiesHasAQueryFilter(
    this IEnumerable<IEntityType> entityTypes)
{
    foreach (var entityType in entityTypes)
    {
        if (entityType.QueryFilter == null
            && entityType.BaseType == null //not a TPH subclass
            && entityType.ClrType
                .GetCustomAttribute<OwnedAttribute>() == null //not an owned type
            && entityType.ClrType
                .GetCustomAttribute<NoQueryFilterNeeded>() == null) //not marked as exempt
            yield return
                $"The entity class {entityType.Name} does not have a query filter";
    }
}
B4. ADDING FAIL-SAFE CHECKS

You need to be careful of breaking the Yagni (You Aren’t Gonna Need It) rule, but a few fail-safe checks on security stuff make me sleep better at night. Here are the two small things I did in this example which will cause an exception if the DataKey isn’t set properly.

Firstly, I added a [Required] attribute to the DataKey property (see below) which tells the database that the DataKey cannot be null. This means if my code fails to set a DataKey then the database will return a constraint error.

[Required] //This means SQL will throw an error if we don't fill it in
public string DataKey { get; private set; }

My second fail-safe is also to do with the DataKey, but in this case I’m anticipating a future change to the business rules that could cause problems. The current business rules say that only the users that are directly linked to a RetailOutlet can create new Stock or Sales entries, but what happens if (when!) that business rule changes and divisional managers can create items in a RetailOutlet? The divisional managers don’t have the correct DataKey, but a new developer might miss that and you could “lose” data. My answer is to add a safety-check to the retail outlet’s DataKey. A retail outlet has a slightly different DataKey format – it ends with a * instead of a |. That means I can check that a retail-outlet-format DataKey is used in the SetShopLevelDataKey method and throw an exception if it’s not in the right format. Here is my code that catches this possible problem.
public void SetShopLevelDataKey(string key)
{
    if (key != null && !key.EndsWith("*"))
        //The shop key must end in "*" (or be null and set elsewhere)
        throw new ApplicationException(
            "You tried to set a shop-level DataKey but your key didn't end with *");

    DataKey = key;
}
This is a very small thing, but because I know that change is likely to come and I might not be around, it could save someone a lot of head scratching working out why data doesn’t end up in the right place.

CONCLUSION

Well done for getting to the end of this long article. I could have made the article much shorter if I had only dealt with the parts on how to implement data authorization, but I wanted to talk about how handling security issues should affect the way you build your application (what I refer to as having a “robust architecture”).

I have to say that the star feature in my data authorization approach is EF Core’s Query Filters. These Query Filters cover ALL possible EF Core based queries with no exceptions. The Query Filters are the cornerstone of the data authorization approach, to which I then add a few more features to manage users’ DataKeys and do clever things to handle the hierarchical features my client needed.

While you most likely don’t need all the features I included in this example, it does give you a look at how far you can push EF Core to get what you want. If you need a nice, simple data authorization example, then please look at the Part 2 article “Handling data authorization in ASP.NET Core and Entity Framework Core”
which has a personal data example which uses the User’s Id as the DataKey.
PART 3: A BETTER WAY TO HANDLE ASP.NET CORE AUTHORIZATION – SIX MONTHS ON

Last Updated: September 28, 2019 | Created: July 2, 2019

About six months ago I wrote the article “A better way to handle authorization in ASP.NET Core”
which quickly became the top article on my web site, with lots of comments and questions. I also gave a talk at the NDC Oslo 2019 conference on the same topic which provided even more questions. In response to all the questions I have developed a new, improved example application, called PermissionAccessControl2 (referred to as “version 2”), with a series of new articles that cover the changes.

The version 2 articles cover improvements/changes based on questions and feedback:

* Part 3: A better way to handle authorization – six months on (THIS ARTICLE).
* Part 4: Building robust and secure data authorization with EF Core.
* Part 5: A better way to handle authorization – refreshing user’s claims.
* Part 6: Adding user impersonation to an ASP.NET Core web application.
* Part 7: Implementing the “better ASP.NET Core authorization” code in your app.

UPDATE: SEE MY NDC OSLO 2019 TALK WHICH COVERS THESE THREE ARTICLES.

Original articles:

* A better way to handle authorization in ASP.NET Core – original article.
* Handling data authorization in ASP.NET Core and Entity Framework Core.

_NOTE: SOME PEOPLE HAD PROBLEMS USING THE CODE IN THE ORIGINAL WEB APPLICATION (REFERRED TO AS VERSION 1) IN THEIR APPLICATIONS, MAINLY BECAUSE OF THE USE OF IN-MEMORY DATABASES. THE VERSION 2 EXAMPLE CODE IS MORE ROBUST AND SUPPORTS REAL DATABASES (E.G. SQL SERVER)._

TL;DR; – SUMMARY (AND LINKS TO SECTIONS)

* This article answers comments/questions raised by the first version and covers the following changes from the original feature authorization article.
IF YOU ARE NEW TO THIS TOPIC THE ORIGINAL ARTICLE IS A BETTER PLACE TO START AS IT EXPLAINS THE WHOLE SYSTEM.

* The changed parts covered in this article are:
* A SIMPLER WAY TO ADD THE USER’S CLAIMS: Since the first article I have found a much simpler way to set up the user’s claims on login if you don’t need a refresh of the user’s claims.
* ROLES: In the original article I used ASP.NET Core’s identity _Roles_ feature, but that adds some limitations. The version 2 example app has its own UserToRole class.
* USING PERMISSIONS IN THE FRONT-END: I cover how to use Permissions in Razor pages and how to send the permissions to a JavaScript-type front-end such as AngularJS, ReactJS etc.
* PROVIDES A SUPERUSER: Typically, a web app needs a user with super-high privileges to allow them to set up a new system. I have added that to the new example app.
* The version 2 example app is a super-simple SaaS (Software as a Service) application which provides stock and sales management to companies with multiple retail outlets.

SETTING THE SCENE – MY FEATURE/DATA AUTHORIZATION APPROACH

As I explained in the first article in this series,
I was asked to build a web application that provided a service to various companies that worked in the retail sector. The rules about what the users could access were quite complex and required me to go beyond the out-of-box features in ASP.NET Core. Also, the data was multi-tenant and hierarchical, i.e. each company’s data must be secured such that a user can only access the data they are allowed to see. Many (but not all) of the issues I solved for my client are generally useful, so, with the permission of my client, I worked on an open-source example application to capture the parts of the authentication and authorization features that I feel are generally useful to other developers.

_If you are not familiar with ASP.NET’s authorization and authentication features, then I suggest you read the “Setting the Scene” section in the first article._

Below is a figure showing the various parts/stages in my approach to feature/data authorization. I use the ASP.NET Core authentication part, but I replace the authorization stage with my own code. My replacement authorization stage provides a few extra features over ASP.NET Core Role-based authorization.

* I use ROLES to represent the type of user that is accessing the system. Typical Roles in a retail system might be SalesAssistant, Manager, Director, with other specific-job Roles like FirstAider, KeyHolder.
* In the ASP.NET Core code I use something I call PERMISSIONS, which represent a specific “use case”, such as CanProcessSale or CanAuthorizeRefund. I place a single Permission on each ASP.NET Core page/web API I want to protect. For instance, I would add the CanProcessSale Permission to all the ASP.NET Core pages/web APIs that are used in the process-sale use case.
* The DataKey is more complex because of the hierarchical data, and I cover all the bases on how to be sure that this multi-tenant system is secure – see the Part 4 article.

There are other parts that I am not going to cover in this article as they are covered in other articles in this series. They are:

* Why using Enums to implement Permissions was a good idea (see this section in the first article).
* How I managed paid-for features (see this section in the first article).
* How I set up and use a DataKey to segregate the data – see the Part 4 article.
What the rest of this article does is deal with improvements I have made in the version 2 example application.
A SIMPLER WAY TO ADD TO THE USER’S CLAIMS

My approach relies on me adding some claims to the User, which is of type ClaimsPrincipal. In the original article I did this by using an Event in ApplicationCookie, mainly because that is what I had to use for my client. While that works, I have found a much simpler way that adds the claims on login. This approach is much easier to write, works for Cookies and Tokens, and is more efficient. Thanks to https://korzh.com/blogs/net-tricks/aspnet-identity-store-user-data-in-claims for writing about this feature.

We do this by implementing a UserClaimsPrincipalFactory and registering it as a service. Here is my implementation of the UserClaimsPrincipalFactory.

public class AddPermissionsToUserClaims : UserClaimsPrincipalFactory<IdentityUser>
{
    private readonly ExtraAuthorizeDbContext _extraAuthDbContext;

    public AddPermissionsToUserClaims(UserManager<IdentityUser> userManager,
        IOptions<IdentityOptions> optionsAccessor,
        ExtraAuthorizeDbContext extraAuthDbContext)
        : base(userManager, optionsAccessor)
    {
        _extraAuthDbContext = extraAuthDbContext;
    }

    protected override async Task<ClaimsIdentity> GenerateClaimsAsync(IdentityUser user)
    {
        var identity = await base.GenerateClaimsAsync(user);
        var userId = identity.Claims
            .SingleOrDefault(x => x.Type == ClaimTypes.NameIdentifier).Value;

        var rtoPCalcer = new CalcAllowedPermissions(_extraAuthDbContext);
        identity.AddClaim(new Claim(
            PermissionConstants.PackedPermissionClaimType,
            await rtoPCalcer.CalcPermissionsForUser(userId)));

        var dataKeyCalcer = new CalcDataKey(_extraAuthDbContext);
        identity.AddClaim(new Claim(
            DataAuthConstants.HierarchicalKeyClaimName,
            dataKeyCalcer.CalcDataKeyForUser(userId)));

        return identity;
    }
}
You can see that I get the original claims by calling the base GenerateClaimsAsync. I then use the UserId to calculate the Permissions and the DataKey, which I add to the original claims. After this method has finished, the rest of the login code will build the Cookie or Token for the user.

To make this work you need to register this class as a service in the ConfigureServices method inside the Startup code, using the following code:

services.AddScoped<IUserClaimsPrincipalFactory<IdentityUser>, AddPermissionsToUserClaims>();

_NOTE: IN THE PART 5 ARTICLE I SHOW WAYS TO REFRESH THE USER’S CLAIMS WHEN THE ROLES/PERMISSIONS HAVE BEEN CHANGED BY AN ADMIN PERSON._

ADDING OUR OWN USERTOROLE CLASS

In the version 1 example app I used ASP.NET Core’s Role/RoleManager for a quick setup. But in a real application I wouldn’t do that, as I only want ASP.NET Core’s Identity system to deal with the authentication part. The main reason for providing my own user-to-role class is that you can then do away with ASP.NET Core’s built-in identity database if you are using something like an OAuth 2 API (which my client’s system used). Also, if you are splitting the authorization part from the authentication part, then it makes sense to have the User-to-Roles links with all the other classes in the authorization part.

_NOTE: THANKS TO OLIVIER OSWALD FOR HIS COMMENTS WHERE HE ASKED WHY I USED ASP.NET CORE’S ROLE SYSTEM IN THE FIRST VERSION. I WAS JUST BEING A BIT LAZY, SO IN VERSION 2 I HAVE DONE IT PROPERLY._

In the version 2 example app I have my own UserToRole class, as shown below.
public class UserToRole
{
    private UserToRole() { } //needed by EF Core

    public UserToRole(string userId, RoleToPermissions role)
    {
        UserId = userId;
        Role = role;
    }

    public string UserId { get; private set; }
    public string RoleName { get; private set; }
    public RoleToPermissions Role { get; private set; }

    //… other code left out
}
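EF Core has to be told that this class uses a composite primary key; a hedged sketch of that configuration (its placement in the authorization DbContext's OnModelCreating is an assumption) looks like this:

```csharp
//Sketch: configure UserId + RoleName as a composite primary key,
//which blocks duplicate User-to-Role links at the database level
protected override void OnModelCreating(ModelBuilder modelBuilder)
{
    modelBuilder.Entity<UserToRole>()
        .HasKey(x => new { x.UserId, x.RoleName });
}
```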
The UserToRole class has a primary key which is formed from both the UserId and the RoleName (known as a “composite key”). I do this to stop duplicate entries linking a User to a Role, which would make managing the User’s Roles more difficult. I also have a method inside the UserToRole class called AddRoleToUser. This adds a Role to a User with a check, as adding a duplicate Role to a User would cause a database exception, so I catch that early and send a user-friendly error message to the user.

USING PERMISSIONS IN THE FRONT-END

The authorization side of ASP.NET Core returns HTTP 403 (forbidden) if a user isn’t allowed access to a method. But to make a better experience for the user we typically want to remove any links, buttons etc. that the user isn’t allowed to access. So how do I do that with my Permissions approach? Here are the two ways you might be implementing your front-end, and how to handle each.
1. WHEN USING RAZOR SYNTAX

If you are using ASP.NET Core in MVC mode or Razor Page mode, then you can use my extension method called UserHasThisPermission. The code below comes from the version 2 _layout.cshtml file and controls whether the Shop menu appears, and what sub-menu items appear.
@if (User.UserHasThisPermission(Permissions.SalesRead))
{
    @* … the Shop menu and its sub-menu items, left out here … *@
}
You can see me using the UserHasThisPermission method to decide whether the menu and each sub-menu item should be displayed.
2. WORKING WITH A JAVASCRIPT FRONT-END FRAMEWORK

In some web applications you might use a JavaScript front-end framework such as AngularJS, ReactJS etc. to manage the front-end. In that case you need to pass the Permissions to your front-end system so that you can add code to control what links/buttons are shown to the user.

It is very simple to get access to the current user’s Permissions via the HttpContext.User variable, which is available in any controller. I do this via a Web API and here is the code from my FrontEndController in my version 2 application.
public IEnumerable<string> Get()
{
    var packedPermissions = HttpContext.User?.Claims.SingleOrDefault(
        x => x.Type == PermissionConstants.PackedPermissionClaimType);
    return packedPermissions?.Value
        .UnpackPermissionsFromString()
        .Select(x => x.ToString());
}
This action returns an array of Permission names. Typically you would call this after a login and store the data in SessionStorage for use while the user is logged in. You can try this in the version 2 application – run the application locally and go to http://localhost:65480/swagger/index.html. Swagger will then display the FrontEnd API, which has one command to get the user’s permissions.
_NOTE: IN PART 5, WHERE THE PERMISSIONS CAN CHANGE DYNAMICALLY, I SHOW A WAY THAT THE FRONT-END CAN DETECT THAT THE PERMISSIONS HAVE CHANGED SO IT CAN UPDATE ITS LOCAL VERSION OF THE PERMISSIONS._

ENABLING A SUPERADMIN USER

In most web applications I have built you need one user that has access to every part of the system – I call this user SuperAdmin. And typically, I have some code that will make sure there is a SuperAdmin user in any system that the application runs against. That way you can run the app with a new database and then use the SuperAdmin user to set up all the other users you need.

A logged-in SuperAdmin user needs all the Permissions so that they can do anything, but that would be hard to keep updated as new permissions are added. Therefore, I added a special Permission called AccessAll and altered the UserHasThisPermission extension method to return true if the current user has the AccessAll Permission.

public static bool UserHasThisPermission(
    this Permissions[] usersPermissions, Permissions permissionToCheck)
{
    return usersPermissions.Contains(permissionToCheck) ||
           usersPermissions.Contains(Permissions.AccessAll);
}
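A small usage sketch of the extension method above (assuming the user's packed permissions have been unpacked into a Permissions array, and that the enum contains SalesRead and AccessAll as described in the article):

```csharp
var superAdmin = new[] { Permissions.AccessAll };
var salesAssistant = new[] { Permissions.SalesRead };

superAdmin.UserHasThisPermission(Permissions.SalesRead);     //true - AccessAll wins
salesAssistant.UserHasThisPermission(Permissions.SalesRead); //true - direct match
salesAssistant.UserHasThisPermission(Permissions.AccessAll); //false
```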
Now we have the concept of a SuperAdmin, we need a way to create the SuperAdmin user. I do this via setup code in the Program class. This method makes sure that the SuperAdmin user is in the current user database, i.e. it adds the SuperAdmin user if one isn’t already present. Note: I am using C# 7.1’s async Main feature to run my startup code.

public static async Task Main(string[] args)
{
    (await BuildWebHostAsync(args)).Run();
}

public static async Task<IWebHost> BuildWebHostAsync(string[] args)
{
    var webHost = WebHost.CreateDefaultBuilder(args)
        .UseStartup<Startup>()
        .Build();

    //Because I might be using in-memory databases I need to make
    //sure they are created before my startup code tries to use them
    SetupDatabases(webHost);

    await webHost.Services.CheckAddSuperAdminAsync();
    await webHost.Services.CheckSeedDataAndUserAsync();
    return webHost;
}
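To show the shape of that startup step, here is a minimal sketch of what a CheckAddSuperAdminAsync extension method could look like. This is not the repo’s exact code – the config keys (`SuperAdmin:Email`, `SuperAdmin:Password`), the role name, and the use of the plain IdentityUser type are all assumptions:

```csharp
using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Identity;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;

public static class SuperAdminSetup
{
    public static async Task CheckAddSuperAdminAsync(this System.IServiceProvider services)
    {
        using (var scope = services.CreateScope())
        {
            var userManager = scope.ServiceProvider
                .GetRequiredService<UserManager<IdentityUser>>();
            var config = scope.ServiceProvider.GetRequiredService<IConfiguration>();

            //Don't add another SuperAdmin if one already exists – this stops
            //someone adding their own SuperAdmin once the system is live
            var existing = await userManager.GetUsersForClaimAsync(
                new Claim(ClaimTypes.Role, "SuperAdmin"));
            if (existing.Any()) return;

            var user = new IdentityUser
            {
                UserName = config["SuperAdmin:Email"],  //assumed config keys
                Email = config["SuperAdmin:Email"]
            };
            await userManager.CreateAsync(user, config["SuperAdmin:Password"]);
            await userManager.AddClaimAsync(user,
                new Claim(ClaimTypes.Role, "SuperAdmin"));
        }
    }
}
```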
The CheckAddSuperAdminAsync method obtains the SuperAdmin email and password from the appsettings.json file (see the Readme file for more information).

_NOTE: The SuperAdmin user is very powerful and needs to be protected. Use a long, complicated password and make sure you provide the SuperAdmin email and password by overriding the appsettings.json values on deployment. My startup code also doesn’t allow a new SuperAdmin user to be added if there is already a user in the database that has the SuperAdmin role. That stops someone adding a new SuperAdmin with their own email address once a system is live._

OTHER THINGS NEW IN VERSION 2 OF THE EXAMPLE APPLICATION

The PermissionAccessControl2 application is
designed to work like a real application, using both feature and data authorization. The application pretends to be a super-simple retail sales application service for multiple companies, i.e. a multi-tenant application. I cover the multi-tenant data authorization in part 4.
The differences in version 2 over version 1 that I have not already mentioned are:
* The code will work with either in-memory databases or SQL Server databases, i.e. it will check if users and data are already present and if so won’t try to add the user/data again.
* You can choose various application setups, such as database type and simple/complex claims setup. This is controlled by data in the appsettings.json file – see the Readme file for more information on this.
* The multi-tenant data access is hierarchical, and much more complex and robust than in version 1 – see Part 4: Building a robust and secure data authorization with EF Core for more on this.
* There are some unit tests. Not a lot, but enough to give you an idea of what is happening.

CONCLUSION
The first article on my approach to authorization in ASP.NET Core has been very popular, and I had great questions via my blog and at my NDC Oslo talk. This caused me to build a new version of the example app, available via a GitHub repo, with many improvements and some new articles that explain the changes/improvements. In this article (known as Part 3) I focused on the ASP.NET Core authorization part, provided a few improvements over the version 1 application, and added explanations of how to use these features. In article Part 4 I cover data authorization with EF Core, and the Part 5 article covers the complex area of updating a user’s permissions/data key dynamically. Hopefully the whole series, with the two example applications, will help you design and build your own authorization systems to suit your needs.
Categories: ASP.NET Core, Entity Framework | 18 Comments
GETTING BETTER DATA FOR UNIT TESTING YOUR EF CORE APPLICATIONS

Last Updated: June 5, 2019 | Created: June 5, 2019

I was building an application using Entity Framework Core (EF Core) for a client and I needed to set up test data to mimic their production system. Their production system was quite complex, with some data that was hard to generate, so writing code to set up the database was out of the question. I tried using JSON files to save data from and restore data to a database, and after a few tries I saw a pattern and decided to build a library to handle this problem. My solution was to serialize the specific data from an existing database and store it as a JSON file. Then in each unit test that needed that data I deserialized the JSON file into classes and used EF Core to save it to the unit test’s database. And because I was copying from a production database, which could have personal data in it, I added a feature that can anonymise personal data so that no privacy laws, such as GDPR, were breached. Rather than writing something specific for my client I decided to generalise this feature and add it to my open-source library EfCore.TestSupport. That way I, and you, can use this library to help create better test data for unit/performance tests, and my client gets a more comprehensive library at no extra cost.

TL; DR; – SUMMARY
* The feature “Seed from Production” allows you to capture a “snapshot” of an existing (production) database into a JSON file, which you can use to recreate the same data in a new database for testing your application.
* This feature relies on an existing database containing the data you want to use. This means it only works for updates or rewrites of an existing application.
* The “Seed from Production” feature is useful in the following cases:
  * For tests that need a lot of complex data that is hard to hand-code.
  * It provides much more representative data for system tests, performance tests and client demos.
* The “Seed from Production” feature also includes an anonymisation stage so that no personal data is stored/used in your application/test code.
* The “Seed from Production” feature relies on EF Core’s excellent handling of saving classes with relationships to the database.
* The “Seed from Production” feature is part of my open-source library EfCore.TestSupport (version 2.0.0 and above).
SETTING THE SCENE – WHY DO I NEED BETTER UNIT TEST DATA?

I spend a lot of time working on database access code (I wrote the book “Entity Framework Core in Action”) and I often have to work with large sets of data (one client’s example database was 1Tbyte in size!). If you use a database in a test, then the database must be in a known state before you start your test. In my experience, what data the unit tests need breaks down into three cases:

* A small number of tests can start with an empty database.
* Most tests only need a few tables/rows of data in the database.
* A small set of tests need a more complex set of data in the database.
Item 1, empty database, is easy to arrange: either delete/create a new database (slow) or build a wipe method to “clean” an existing database (faster). For item 2, few tables/rows, I usually write code to set up the database with the data I need – these are often extension methods with names like SeedDatabaseFourBooks or CreateDummyOrder. But when it comes to item 3, which needs complex data, it can be hard work to write and a pain to keep up to date when the database schema or data changes. I have tried several approaches in the past (real database and run tests in a transaction, database snapshots, seed from an Excel file, or just write tests to handle unknown data setup). But EF Core’s excellent approach to saving classes with relationships to a database allows me to produce a better system. Personally, I have found having data that looks real makes a difference when I am testing at every level. Yes, I can test with the book entitled “Book1” with author “Author1”, but a book named “Domain-Driven Design” with author “Eric Evans” makes it easier to spot errors. Therefore, I work to produce “real looking” data when I am testing or demonstrating an application to a client.
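As an illustration of the item 2 style of seeding, this is the kind of extension method I mean – a sketch only, where the EfCoreContext/Book types and their property names are assumptions taken from the book app example used later:

```csharp
using System;
using System.Collections.Generic;

public static class SeedExtensions
{
    //Adds a small, known set of rows for tests that only need a few rows of data
    public static List<Book> SeedDatabaseFourBooks(this EfCoreContext context)
    {
        var books = new List<Book>
        {
            new Book { Title = "Domain-Driven Design",
                       PublishedOn = new DateTime(2003, 8, 30) },
            new Book { Title = "Refactoring",
                       PublishedOn = new DateTime(1999, 7, 8) },
            new Book { Title = "Patterns of Enterprise Application Architecture",
                       PublishedOn = new DateTime(2002, 11, 15) },
            new Book { Title = "Test Driven Development",
                       PublishedOn = new DateTime(2002, 11, 8) },
        };
        context.AddRange(books);
        context.SaveChanges();
        return books;  //returning the data lets the test assert against it
    }
}
```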
One obvious limitation of the “Seed from Production” approach is that you need an existing database that contains data you can copy from! Therefore, this works well when you are updating or extending an existing application. However, I have also found this useful when building a new application, as the development will (should!) soon produce pre-production data that you can use.

> NOTE: Some people would say you shouldn’t be accessing a database as part of a unit test, as that is an integration test. I understand their view, and in some business logic I do replace the database access layer with an interface (see this section in my article about business logic). However, I am focused on building things quickly and I find using a real database makes it easier/quicker to write tests (especially if you can use an in-memory database), which means my unit test also checks that my database relationships/constraints work too.

HOW MY “SEED FROM PRODUCTION” SYSTEM WORKS

When I generalised the “seed from production” system I listed what I needed:
* A way to read data from another database and store it in a file. That way the “snapshot” data becomes part of your unit test code and is covered by source control.
* The data may come from a production database that contains personal data. I need a way to anonymise that data before it’s saved to a file.
* A way to take the stored “snapshot” and write it back out to a new database for use in tests.
* The option to alter the “snapshot” data before it is written to the database, for cases where a particular unit test needs a property(s) set to a specific value.
* Finally, I need a system that makes updating the “snapshot” data easy, as the database schema/data is likely to change often.

My “Seed from Production” feature handles all these requirements by splitting the process into two parts: an extract part, which is done once, and the seed part, which runs before each test to set up the database. The steps are:

* Extract part – only run if the database changes.
  * You write the code to read the data you need from the production database.
  * The DataResetter then:
    * Resets the primary keys and foreign keys to default values so that EF Core will create new versions in the test database.
    * Optionally anonymises specific properties that need privacy, e.g. you may want to anonymise all the names, emails, addresses etc.
    * Converts the classes to a JSON string.
    * Saves this JSON string to a file, typically in your unit test project.
* Seed database part – run at the start of every unit test.
  * You need to provide an empty database for the unit test.
  * The DataResetter reads the JSON file back into classes mapped to the database.
  * Optionally you can “tweak” any specific data in the classes that your unit test needs.
  * Then you add the data to the database and call SaveChanges.

That might sound quite complicated, but most of it is done for you by the library methods. The diagram below shows the parts of the two stages to make it clearer – the parts shown in orange are the parts you need to write, while the parts in blue are provided by the library.

SHOW ME THE CODE!
This will all make more sense when you see the code, so in the next subsections I show the various usages of the “Seed from Production” library. They are:

* Extract with no anonymisation.
* Extract showing an example of anonymisation.
* Seed a unit test database, with optional update of the data.
* Seed database when using DDD-styled entity classes.

In all the examples I use my book app database, which I used in the book I wrote for Manning, “Entity Framework Core in Action”. The book app “sells” books and therefore the database contains books, with authors, reviews and possible price promotions – you can see this in the DataLayer/BookApp folder of my EfCore.TestSupport GitHub project.

> NOTE: You can see a live version of the book app at http://efcoreinaction.com/

1. EXTRACT WITH NO ANONYMISATION

I start with the extracting data from a database stage, which only needs to be run when the database schema or data has changed. To make it simple to run I make it a unit test, but I define that test in such a way that it only runs in debug mode (that stops it being run when you run all the tests).
[RunnableInDebugOnly]
public void ExampleExtract()
{
    //the generic parameter and constructor argument were lost in the page
    //extraction – the DbContext type and connection name below are reconstructions
    var sqlSetup = new SqlServerProductionSetup<EfCoreContext>("ExampleDatabaseConnection");
    using (var context = new EfCoreContext(sqlSetup.Options))
    {
        //1a. Read in the data to want to seed the database with
        var entities = context.Books
            .Include(x => x.Reviews)
            .Include(x => x.Promotion)
            .Include(x => x.AuthorsLink)
                .ThenInclude(x => x.Author)
            .ToList();

        //1b. Reset primary and foreign keys
        var resetter = new DataResetter(context);
        resetter.ResetKeysEntityAndRelationships(entities);

        //1c. Convert to JSON string
        var jsonString = entities.DefaultSerializeToJson();
        //1d. Save to JSON local file in TestData directory
        sqlSetup.DatabaseName.WriteJsonToJsonFile(jsonString);
    }
}
The things to note are:

* The RunnableInDebugOnly attribute (available in my EfCore.TestSupport library) stops the unit test being run in a normal run of the unit tests. This method only needs to be run if the database schema or data changes.
* The SqlServerProductionSetup class takes the name of a connection in the appsettings.json file and sets up the options for the given DbContext so that you can open it.
* The query reads in all the books, with all their relationships that I want to save.
* The resetter resets the primary keys and foreign keys to their default value. You need to do this to ensure EF Core works out the relationships via the navigational properties and creates new rows for all the data.
* DefaultSerializeToJson uses a default setting for Newtonsoft.Json’s SerializeObject method. This works in most cases, but you can write your own if you need different settings.
* Finally, WriteJsonToJsonFile writes the data into a file in the TestData folder of your unit tests. You can supply any unique string, which is used as part of the file name – typically I use the name of the database it came from, which the SqlServerProductionSetup class provides.
2. EXTRACT SHOWING AN EXAMPLE OF USING ANONYMISATION

As I said before, you might need to anonymise names, emails, addresses etc. that were in your production database. The DataResetter has a simple but powerful system that allows you to define a series of properties/classes that need anonymising. You define a class and a property in that class to anonymise, and the DataResetter will traverse the whole sequence of relationships and reset every instance of that class+property. As you will see, you can define lots of classes/properties to be anonymised. The default anonymisation method uses GUIDs as strings, so the name “Eric Evans” would be replaced with something like “2c7211292f2341068305309ff6783764”. That’s fine, but it’s not that friendly if you want to do a demo or testing in general. That is why I provide a way to replace the default anonymisation method, which I show in the example (but you don’t have to if GUID strings are OK for you).

The code below is an example of what you can do by using an external library to provide random names and places. In this implementation I use the DotNetRandomNameGenerator NuGet package and create a few different formats you can call for, such as FirstName, LastName, FullName etc.

public class MyAnonymiser
{
    readonly PersonNameGenerator _pGenerator;

    public MyAnonymiser(int? seed = null)
    {
        var random = seed == null ? new Random() : new Random((int)seed);
        _pGenerator = new PersonNameGenerator(random);
    }

    public string AnonymiseThis(AnonymiserData data, object objectInstance)
    {
        switch (data.ReplacementType)
        {
            case "FullName":
                return _pGenerator.GenerateRandomFirstAndLastName();
            case "FirstName":
                return _pGenerator.GenerateRandomFirstName();
            case "LastName":
                return _pGenerator.GenerateRandomLastName();
            //… etc. Add more versions as needed
            default:
                return _pGenerator.GenerateRandomFirstAndLastName();
        }
    }
}
The things to note are:

* I designed my anonymiser to take an optional number to control the output. If no number is given, then the sequence of names has a random start (i.e. it produces different names each time it is run). If a number is given, then you get the same random sequence of names every time – useful if you want to check properties in your unit test.
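The seeded option works because two Random instances created with the same seed produce identical sequences, so the name generator becomes repeatable – a small sketch of the behaviour being relied on:

```csharp
using System;
using NameGenerator.Generators;  //DotNetRandomNameGenerator namespace (assumed)

class SeededNamesDemo
{
    static void Main()
    {
        //Same seed -> same sequence of generated names on every run,
        //so a unit test can assert on a specific anonymised value
        var gen1 = new PersonNameGenerator(new Random(42));
        var gen2 = new PersonNameGenerator(new Random(42));

        var nameA = gen1.GenerateRandomFirstAndLastName();
        var nameB = gen2.GenerateRandomFirstAndLastName();
        Console.WriteLine(nameA == nameB);  //the two generators stay in step
    }
}
```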
> NOTE: See the Seed from Production Anonymization documentation for more on the AnonymiserFunc and its features. There are several pieces of information I have not described here.

The code below shows an extract method, which is very similar to the first version, but with some extra code to a) link in the MyAnonymiser, and b) define the class+property that needs to be anonymised.
[RunnableInDebugOnly]
public void ExampleExtractWithAnonymiseLinkToLibrary()
{
    //generic parameter and connection name reconstructed – lost in the page extraction
    var sqlSetup = new SqlServerProductionSetup<EfCoreContext>("ExampleDatabaseConnection");
    using (var context = new EfCoreContext(sqlSetup.Options))
    {
        //1a. Read in the data to want to seed the database with
        var entities = context.Books
            .Include(x => x.Reviews)
            .Include(x => x.Promotion)
            .Include(x => x.AuthorsLink)
                .ThenInclude(x => x.Author)
            .ToList();

        //1b-i. Set up resetter config to use own method
        var myAnonymiser = new MyAnonymiser(42);
        var config = new DataResetterConfig
        {
            AnonymiserFunc = myAnonymiser.AnonymiseThis
        };
        //1b-ii. Add all class/properties that you want to anonymise
        //(the two calls below are reconstructions – the generic parts
        //were lost in the page extraction)
        config.AddToAnonymiseList<Author>(x => x.Name, "FullName");
        config.AddToAnonymiseList<Review>(x => x.VoterName, "FullName");

        //... the rest of the method is the same as the first version,
        //but with the config passed to the DataResetter
    }
}
The things to note are:

* I create MyAnonymiser, and in this case I provide a seed number. This means the same sequence of random names will be created whenever the extract is run. This can be useful if you access the anonymised properties in your unit test.
* I override the default AnonymiserFunc by creating a DataResetterConfig class and setting the AnonymiserFunc property to the replacement method from my MyAnonymiser class.
* I add two class+property items that should be anonymised via the AddToAnonymiseList method.

All the rest of the method is the same.

> NOTE: The ResetKeysEntityAndRelationships method follows all the navigational links, so every instance of the given class+property that is linked to the root class will be reset. It also uses reflection, so it can anonymise properties which have private setters.
3. SEED A UNIT TEST DATABASE FROM THE JSON FILE

Now I show you a typical unit test where I seed the database from the data stored in the JSON file. In this case I use a Sqlite in-memory database, which is very fast to set up and run (see my article “Using in-memory databases for unit testing EF Core applications” for when and how you can use this type of database for unit testing).

public void ExampleSeedDatabase()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();  //generic parameter reconstructed
    using (var context = new EfCoreContext(options))
    {
        //2a. make sure you have an empty database
        context.Database.EnsureCreated();
        //2b. read the entities back from the JSON file
        //(the generic parameter was lost in the page extraction – assumed List<Book>)
        var entities = "ExampleDatabase".ReadSeedDataFromJsonFile<List<Book>>();
        //2c. Optionally “tweak” any specific data in the classes that your unit test needs
        entities.First().Title = "new title";
        //2d. Add the data to the database and save
        context.AddRange(entities);
        context.SaveChanges();

        //ATTEMPT
        //... run your tests here

        //VERIFY
        //... verify your tests worked here
    }
}
The things to note are:

* In this case I am using an in-memory database, so it will be empty. If you are using a real database, then normally you clear the database before you start so that the unit test has a “known” starting point. _(Note that you don’t have to clear the database for the seed stage to work – it will just keep adding a new copy of the snapshot every time, but your unit test database will grow over time.)_
* My ReadSeedDataFromJsonFile extension method reads the JSON file with the reference “ExampleDatabase” (which was the name of the database that was imported – see extract code) and uses Newtonsoft.Json’s DeserializeObject method to turn the JSON back into entity classes with relationships.
* Optionally you might need to tweak the data specifically for the test you are going to run. That’s easy, as you have access to the classes at this stage.
* You use Add, or AddRange if it’s a collection, to add the new classes to the database.
* The last step is to call SaveChanges to get all the entities and relationships created in the database. EF Core will follow all the navigational links, like Reviews, to work out what is linked to what and set up the primary keys/foreign keys as required.
* Then your tests and asserts go at the end of the unit test.
4. SEED DATABASE WHEN USING DDD-STYLED ENTITY CLASSES

If you have read any of my other articles you will know I am a great fan of DDD-styled entity classes (see my article “Creating Domain-Driven Design entity classes with Entity Framework Core” for more about this). So, of course, I wanted the Seed from Production feature to work with DDD-styled classes, which it now does, but you do need to be careful, so here are some notes. Problems occur if Newtonsoft.Json can’t find a way to set a property at deserialization time. This fooled me for quite a while (see the issue I raised on the Newtonsoft.Json GitHub). The solution I came up with was adding a setting to the serialization (and deserialization) that tells Newtonsoft.Json that it can set via private setters. This works for me (including private fields mapped to IEnumerable), but in case you have a more complex state there are other ways to set this up. The most useful is to create a private constructor with parameters that match the properties by type and name, and then place a [JsonConstructor] attribute on that constructor (there are other ways too – look at the Newtonsoft.Json docs).

> NOTE: The symptoms of Newtonsoft.Json failing to serialize because it can’t access a property aren’t that obvious. In one case Newtonsoft.Json threw an unexplained “Self referencing loop detected” exception. And when I changed the JsonSerializerSettings to ignore self-referencing loops it incorrectly serialized the data by adding a duplicate (!). You can see the gory details in the issue I raised on Newtonsoft.Json.
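One common way to tell Newtonsoft.Json it may use private setters is a custom contract resolver – an illustrative sketch of the technique, not necessarily the exact code the library uses:

```csharp
using System.Reflection;
using Newtonsoft.Json;
using Newtonsoft.Json.Serialization;

public class PrivateSetterContractResolver : DefaultContractResolver
{
    protected override JsonProperty CreateProperty(
        MemberInfo member, MemberSerialization memberSerialization)
    {
        var prop = base.CreateProperty(member, memberSerialization);
        //If the property isn't publicly writable, check for a private setter
        if (!prop.Writable && member is PropertyInfo propInfo)
            prop.Writable = propInfo.GetSetMethod(nonPublic: true) != null;
        return prop;
    }
}

//usage: pass the resolver in the settings given to DeserializeObject
//var settings = new JsonSerializerSettings
//    { ContractResolver = new PrivateSetterContractResolver() };
```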
ASIDE – HOW DOES THIS SEEDING WORK?

> NOTE: This is an optional section – I thought you might like to learn something about EF Core and how it handles classes with relationships (known as “navigational links” in EF Core).

You may be wondering how this seeding feature works – basically it relies on some magic that EF Core performs.

* Firstly, EF Core must work out what State a class is in, e.g. is it an _Added_ (new) entry or an existing (_tracked_) entry, which tells it whether it needs to create a new entry or just refer to an existing entry in the database. In this case EF Core sets the state of the class instances coming from the JSON as _Added_ and will write them out to the database.
* The second part is how EF Core works out the relationships between each class instance. Because the ResetKeysEntityAndRelationships method reset the foreign keys (and the primary keys), EF Core relies on the navigational properties to work out the relationships between each class instance.
These two things mean that the database is updated with the correct data and foreign keys, even if the relationships are complex. This feature is what makes EF Core so nice to work with, not just in this feature but for any adding or updating of linked classes. Here is a simple example taken from one of my talks, with the code and two diagrams showing you the before and after. In this case I create a new book, with a new many-to-many BookAuthor link to an existing Author.

var existingAuthor = context.Authors.First();
var newBook = new Book { Title = "New Book" };
newBook.AuthorLinks = new List<BookAuthor>
{
    new BookAuthor
    {
        Book = newBook,
        Author = existingAuthor,
        Order = 1
    }
};
After that the classes look like this: note red classes are new, while blue ones have been read from the database (i.e. are tracked by EF Core). Then we save this to the database with the following code.

context.Add(newBook);
context.SaveChanges();

After that the classes would look like this, with all the primary and foreign keys filled in and all navigational links set up. EF Core has done the following to make this work:

* During the Add stage it sets up all the navigation links and copies the primary keys of existing instances into the correct foreign keys (in this case the Author’s existing primary key into the AuthorId foreign key in the BookAuthor class).
* In the SaveChanges part it does the following within a transaction:
  * It inserts a new row for this Book, which sets its primary key.
  * It then copies the Book’s primary key into the BookId foreign key in the BookAuthor class.
  * It then inserts a new row for the BookAuthor class.

This makes handling linked classes so simple in EF Core.

CONCLUSION
As I said earlier, I have tried different ways over the years to set up complex databases, both with ADO.NET and EF6.x. EF Core has a number of features (good access to the database schema and better handling of adding linked classes to a database) which make it much easier to implement a generic solution to this problem. For my client the “Seed from Production” feature works really well. Their database contains data that is hard to create manually, and a pain to keep up to date as the application grows. By copying a database set up by the existing application we captured the data to use in our unit tests and some performance tests too. Also, the test data becomes part of the test code and is therefore covered by the standard source control system. Another bonus is it makes it simple to run any tests in a DevOps pipeline, as the test database can be created and seeded automatically, which saves us from having to have a specific database available in the DevOps path. You won’t need this feature much, as most unit tests should use very basic data, but for those complex systems where setting up the database is complicated this library can save you (and me) lots of time. And with the anonymisation stage happening before the JSON file is created, you don’t have to worry about having personal data in your unit tests.
Happy coding!
Categories: .NET Core, Entity Framework | 0 Comments
GENERICSERVICES DESIGN PHILOSOPHY + TIPS AND TECHNIQUES

Last Updated: May 22, 2019 | Created: April 3, 2019

I read Jimmy Bogard’s article called “AutoMapper’s Design Philosophy”, which he wrote to help people understand what job AutoMapper was designed to do. This sort of article helps people who get frustrated with AutoMapper because they are trying to use it to do something it wasn’t really designed to do. I use AutoMapper in my libraries and I was glad to see my usage is right in line with what AutoMapper is designed to do. I thought I would write a similar article for my GenericServices library (see summary for what GenericServices does) to help anyone who uses, or wants to use, my GenericServices library (and associated libraries). While the use of my GenericServices library is tiny compared to AutoMapper (about 0.05% of AutoMapper’s downloads!) I too have issues or requests for features that don’t fit into what GenericServices is designed to do. Hopefully this article will help people understand my GenericServices library better, and I also add a few tips and techniques that I have found useful too. Other related articles in this series:

* “GenericServices: A library to provide CRUD front-end services from a EF Core database” – introduction to the GenericServices library.
* “Creating Domain-Driven Design entity classes with Entity Framework Core” – using GenericServices with Domain-Driven Design entity classes.
* “How to write good, testable ASP.NET Core Web API code quickly” – examples of using GenericServices with ASP.NET Core Web API.
* “Pragmatic Domain-Driven Design: supporting JSON Patch in Entity Framework Core” – examples of using JSON Patch in ASP.NET Core Web API.
* “GenericServices Design Philosophy + tips and techniques” – this article.
TL; DR; – SUMMARY
* The GenericServices library is designed to speed up the development of building front-end Entity Framework 6 and Entity Framework Core (EF Core) database accesses.
* GenericServices does this by automating the mapping of database classes to DTOs (Data Transfer Objects, also known as ViewModels in ASP.NET) in a way that builds efficient database accesses.
* From my personal experience, I would say that my GenericServices library saved me 2 months of development time over a 12-month period.
* GenericServices also has a feature where it can work with Domain-Driven Design (DDD) styled EF Core database classes. It can find and call methods or constructors inside a DDD-styled EF Core database class. That gives very good control over creates and updates.
* This article tells you what GenericServices can, and cannot, do.
* I then list five GenericServices tips and techniques that I use when using this library.

SETTING THE SCENE – WHAT DOES GENERICSERVICES DO?

_TIP: This is a shortened version of a section from the introduction article on EFcore.GenericServices.
The original has more code in it._

GenericServices is designed to make writing front-end CRUD (Create, Read, Update and Delete) EF Core database accesses much easier. It handles both the database access code and the “adaption” of the database data to what the front-end of your application needs. It does this by providing a library with methods for Create, Read, Update and Delete that use either an EF Core database class or a DTO (Data Transfer Object, also known as a ViewModel in ASP.NET) to define what EF Core class is involved and what data needs to be read or written. I’m going to take a page used to update a database class to describe the typical issues that come up. My example application is an e-commerce site selling technical books, and I implement a feature where an authorised user can add a sales promotion to a book by reducing its price. The ASP.NET Core web page is shown below with the user’s input in red and comments on the left about the Book properties and how they are involved in the update. In a web/mobile application a feature like this consists of two stages:
1. READ DATA TO DISPLAY

The display to the user needs five properties taken from the Book, and I use a DTO (Data Transfer Object, also known as a ViewModel) that contains the five properties I want out of the Book entity class. GenericServices uses AutoMapper to build a query using LINQ, which EF Core turns into an efficient SQL SELECT command that just reads the five columns. Below is the DTO, with the empty interface ILinkToEntity<Book> that tells GenericServices which entity class the DTO maps to.

public class AddPromotionDto : ILinkToEntity<Book>  //the class declaration was lost in the page extraction – reconstructed
{
    public int BookId { get; set; }       //Tells GenericServices not to copy this back to the database
    public decimal OrgPrice { get; set; } //Tells GenericServices not to copy this back to the database
    public string Title { get; set; }
    public decimal ActualPrice { get; set; }
    public string PromotionalText { get; set; }
}
Below is the GenericService code that reads the data into the DTO, with the id holding the Book’s primary key (see this link for the full list of all the code).

var dto = _service.ReadSingle<AddPromotionDto>(id);  //generic parameter reconstructed – lost in the page extraction

2. UPDATE THE DATA
The second part is the update of the Book class with the new ActualPrice and the PromotionalText. This requires a) the Book entity to be read in, b) the Book entity to be updated with the two new values, and c) the updated Book entity to be written back to the database. Below is the GenericService code that does this (see this link for the full list of all the code).

_service.UpdateAndSave(dto);

Overall, the two GenericService calls replace about 15 lines of hand-written code that does the same thing.

THE PROBLEM THAT GENERICSERVICES IS AIMED AT SOLVING

I built GenericServices to make me faster at building .NET applications and to remove some of the tedious coding (e.g. LINQ Selects with lots of properties) around building front-end CRUD EF Core database accesses. Because I care about performance, I designed the library to build efficient database accesses using the LINQ Select command, i.e. only loading the properties/columns that are needed. With the release of EF Core, I rewrote the library (GenericServices (EF6) ->
EfCore.GenericServices) and added new
features to work with a Domain-Driven Design (DDD) styled databaseclasses
.
DDD-styled database classes give much better control over how creates and updates are done. GenericServices is meant to make simple-to-moderate complexity database reads easy to build. It can also handle all deletes and some single-class creates and updates with normal database classes, but because EfCore.GenericServices supports DDD-styled database classes it can call constructors/methods which can handle every type of create or update.
Overall, I find GenericServices will handle more than 60% of all front-end CRUD accesses, but with DDD-styled database classes this goes up to nearly 80%. It’s only the really complex reads/writes that can be easier to write by hand, and some of those writes should really be classed as business logic anyway. The trick is to know when to use GenericServices, and when to hand-code the database access – I cover that next.
WHAT EFCORE.GENERICSERVICES CAN/CANNOT HANDLE
OK, let’s get down to the details of what GenericServices can and cannot do, with a list of good/bad usages.
GENERICSERVICES IS GOOD AT:
* All reads that use flattening (see Note 1)
* All deletes
* Create/Update
  * Normal (i.e. non-DDD) database classes: of a single class (see Note 2)
  * DDD-styled database classes: any create/update
GENERICSERVICES IS BAD AT:
* Any read that needs extra EF Core commands like .Include(), .Load(), etc. (see Note 1)
* Create/Update
  * Normal (i.e. non-DDD) database classes: with relationships (see Note 2)
NOTE 1: READ – FLATTEN, NOT INCLUDE
The GenericServices reads are designed for sending to a display page or a Web API, and I can normally implement any such read by using AutoMapper’s “Flattening” feature. However, sometimes setting up special AutoMapper configurations (see docs) can take more effort than just hand-coding the read. Don’t be afraid to build your own read queries if that is simpler for you.
You cannot use GenericServices for reads that need .Include(), .Load(), etc. Typically that sort of read is used in business logic, and I have a separate library called EfCore.GenericBizRunner for handling that (see the articles “A library to run your business logic when using Entity Framework Core” and “Architecture of Business Layer working with Entity Framework (Core and v6)” for more about handling business logic).
> NOTE: Using .Include(), .Load() or lazy loading is inefficient for a
> simple read, as it means you are either loading data you don’t need,
> and/or making multiple trips to the database, which is slow.
NOTE 2: CREATE/UPDATE – SINGLE OR RELATIONSHIPS
When using normal (non-DDD) database classes GenericServices will only create/update a single class mapped to the database via EF Core. However, you can get around this because GenericServices is designed to work with a certain style of DDD entity classes, i.e. GenericServices can find and call constructors or methods inside your EF Core class to do a create or update, which allows your code to handle any level of complexity of a create or update.
GenericServices also gives you the option to validate the data that is written to the database (off by default – turn it on via the GenericServiceConfig class). This, coupled with DDD constructors/methods, allows you to write complex validation and checks. However, if I think the code is getting too much like business logic then I use EfCore.GenericBizRunner.
TIPS AND TECHNIQUES
Clearly I know my library very well, and I can do things others might not think of. This is a list of things I have done that you might find useful. Here is the list, to save you scrolling down to see what’s there.
* Try using DDD-styled entity classes with EfCore.GenericServices
* Don’t try to use GenericServices for business logic database accesses
* How to filter, order, page a GenericService read query
* Helper library for using GenericServices with ASP.NET Core Web API
* How to unit test your GenericServices code
A. TRY USING DDD-STYLED ENTITY CLASSES WITH EFCORE.GENERICSERVICES
Personally, I have moved over to using DDD-styled database classes with EF Core, so let me explain the differences/advantages of DDD. Non-DDD classes have properties with public setters, i.e. anyone can alter a property, while DDD-styled classes have private setters, which means you must use a constructor or a method to create or update properties. So DDD-styled classes “lock down” any changes, so that no one can bypass the create/update code in that class (see my article “Creating Domain-Driven Design entity classes with Entity Framework Core” for more on this).
Yes, DDD-styled database classes do take some getting used to, but they give you an unparalleled level of control over creates and updates, including altering not only properties but relationships as well. EfCore.GenericServices works with DDD-styled EF Core classes and finds constructors/methods by matching the parameter names/types (see the GenericServices DDD docs here).
B. DON’T TRY TO USE GENERICSERVICES FOR BUSINESS LOGIC DATABASE ACCESSES
When I think about database accesses in an application I separate them into two types:
* CRUD database accesses done by the front-end, e.g. read this, update that, delete the other.
* Business logic database accesses, e.g. create an order, calculate the price, update the stock status.
The two types of accesses are often different – front-end CRUD is about simple and efficient database accesses, while business logic database accesses are about rules and processes. GenericServices is designed for CRUD database accesses for the front-end and won’t do a good job for business logic database accesses – I use my GenericBizRunner library for that.
Sure, it can get hazy as to whether a database access is a simple CRUD access or business logic – for instance, is changing the price of an item a simple CRUD update or a piece of business logic? However, there are some actions, like updating the stock status which can trigger a restocking order, that are clearly business logic and should be handled separately (see my article “Architecture of Business Layer working with Entity Framework (Core and v6)” on how I handle business logic).
There are two things that GenericServices + DDD-styled database classes can’t do:
* GenericServices doesn’t support async calls to methods in a DDD-styled database class. I could support it, but I have held off for now. If I feel I need async I use my GenericBizRunner, which has very good async handling throughout.
* The constructors/methods in a DDD-styled database class can’t easily have dependency injection added (you could, but you would be pushing the whole DDD pattern a bit too far). You might like to read my article “Three approaches to Domain-Driven Design with Entity Framework Core” and make your own mind up as to whether you want to do that.
C. HOW TO FILTER, ORDER, PAGE A GENERICSERVICE READ QUERY
The EfCore.GenericServices ReadManyNoTracked method returns an IQueryable result, so you can add your own filter, order and paging LINQ commands, which are applied after the mapping to the DTO. Filtering etc. after the mapping to the DTO normally covers 90% of your query manipulation, but what happens if you need to filter or change a read prior to the projection to the DTO? Then you need the ProjectFromEntityToDto method, which lets you apply LINQ commands on the entity before the projection, say if you were using Query Filters for soft delete.
> NOTE: If you are using Query Filters then all the
> EfCore.GenericServices methods obey the query filter, apart from
> the method _DeleteWithActionAndSave_. This turns OFF any query
> filters so that you can delete anything – you should provide an
> action that checks the user is allowed to delete the specific entry.
D. HELPER LIBRARY FOR USING GENERICSERVICES WITH ASP.NET CORE WEB API
I have used ASP.NET a lot over the years, and I have developed several patterns for handling GenericServices (and GenericBizRunner), especially around Web APIs. I have now packaged these patterns into a companion library called EfCore.GenericServices.AspNetCore.
For ASP.NET MVC and Razor Pages, EfCore.GenericServices.AspNetCore has a CopyErrorsToModelState extension method that copies GenericServices’s status errors into the ASP.NET Core ModelState, so they become validation errors.
The features for Web API are quite comprehensive.
* GenericServices supports JSON Patch for updates – see my article “Pragmatic Domain-Driven Design: supporting JSON Patch in Entity Framework Core” for full details of this feature.
* For Web API it can turn GenericServices’s status into the correct response type, with HTTP code, success/errors parts and any result to send. This makes for a very short Web API method with a clearly defined output type for Swagger – see the example below.

public async Task<ActionResult<WebApiMessageAndResult<BookListDto>>>
    GetAsync(int id, [FromServices] ICrudServicesAsync service)
{
    return service.Response(
        await service.ReadSingleAsync<BookListDto>(id));
}

For more on EfCore.GenericServices and ASP.NET Core Web APIs have a look at my article “How to write good, testable ASP.NET Core Web API code quickly”.
E. HOW TO UNIT TEST YOUR GENERICSERVICES CODE
I’m a big fan of unit testing, but I also want to write my tests quickly. I therefore have built-in methods to help unit test code that uses EfCore.GenericServices. I also have a whole library called EfCore.TestSupport to help with unit testing any code that uses EF Core.
EfCore.GenericServices has a number of methods that will set up the data that GenericServices would normally get via dependency injection (DI) – see the SetupSingleDtoAndEntities call in the code below. The other methods, like SqliteInMemory.CreateOptions, come from my EfCore.TestSupport library.

public void TestProjectBookTitleSingleOk()
{
    //SETUP
    var options = SqliteInMemory.CreateOptions<EfCoreContext>();
    using (var context = new EfCoreContext(options))
    {
        context.Database.EnsureCreated();
        context.SeedDatabaseFourBooks();

        var utData = context.SetupSingleDtoAndEntities<BookTitle>();
        var service = new CrudServices(context, utData.ConfigAndMapper);

        //ATTEMPT
        var dto = service.ReadSingle<BookTitle>(1);

        //VERIFY
        service.IsValid.ShouldBeTrue(service.GetAllErrors());
        dto.BookId.ShouldEqual(1);
        dto.Title.ShouldEqual("Refactoring");
    }
}
I also added a ResponseDecoders class, containing a number of extension methods, to my EfCore.GenericServices.AspNetCore library; these turn a Web API response created by that library back into its component parts. This makes testing Web API methods simpler. This link to a set of unit tests gives you an idea of how you could use the extension methods in integration testing. Also see the unit testing section of my article “How to write good, testable ASP.NET Core Web API code quickly”.
CONCLUSION
I hope this article helps people to get the best out of my EfCore.GenericServices library and its associated libraries, EfCore.GenericServices.AspNetCore and EfCore.GenericBizRunner. All these libraries were built to make me faster at developing applications, and also to remove some of the tedious coding so I can get on with coding the parts that need real thought.
The important section is “What EfCore.GenericServices can/cannot handle”, which tells you what the library can and cannot do. Also note my comments on the difference between front-end CRUD (GenericServices) and business logic (GenericBizRunner). If you stay in the “sweet spot” of each of these libraries, then they will work well for you. But don’t be afraid to abandon either library and write your own code if it’s easier or clearer – pick the approach that is clear, but fast to develop.
I also hope the tips and techniques will alert you to extra parts of the EfCore.GenericServices library that you might not know about. I have used my libraries on many projects and learnt a lot. The list above contains some things I have learnt to look out for, plus links to other libraries/techniques that help me be a fast developer.
Happy coding.
Categories: .NET Core, ASP.NET Core, Entity Framework, GenericServices
DECODING ENTITY FRAMEWORK CORE LOGS INTO RUNNABLE SQL
Created: March 20, 2019
This isn’t one of my long articles, but just a bit of fun I had trying to convert Entity Framework Core’s (EF Core) CommandExecuted logs back into real SQL. EF Core’s logging is much improved over EF6.x and it returns very readable SQL (see the example below).

var id = 1;
var book = context.Books.Single(x => x.BookId == id);

Produces this log output:

Executed DbCommand (1ms)
SELECT TOP(2) [x].[BookId], …
FROM [Books] AS [x]
WHERE ([x].[SoftDeleted] = 0) AND ([x].[BookId] = @__id_0)

Now I spend quite a bit of time understanding and performance tuning EF Core code, so it’s very useful if I can copy & paste the SQL into something like Microsoft’s SQL Server Management Studio (SSMS) to see how it performs. The problem is I have to hand-edit the SQL to add the correct values to replace any parameters (see @__id_0 in the code above).
So, in my spare time (??), I decided to try to create some code that would automatically replace each parameter reference with its actual value. It turns out it’s quite difficult, and you can’t quite get everything right, but it’s good enough to help in lots of places. Here is the story of how I added this feature to my EfCore.TestSupport library.
THE STEPS TO BUILDING MY DECODEMESSAGE METHOD
The steps I needed to do were:
* Capture EF Core’s logging output
* Turn on EnableSensitiveDataLogging
* Catch any EF Core CommandExecuted logs
* Decode the parameters
* Replace any parameter references in the SQL with the ‘correct’ parameter value
Now, I did say I was going to keep this article short, so I’m going to give you some code that handles the first three parts. You can see the EnableSensitiveDataLogging method near the end of building the options.
var logs = new List<string>();
var options = new DbContextOptionsBuilder<BookContext>()
    .UseLoggerFactory(/* a logger provider that adds each log to logs */)
    .EnableSensitiveDataLogging()
    .Options;
using (var context = new BookContext(options))
{
    //… now start using context

> NOTE: Sensitive data logging is fine in your unit tests, but you
> should NOT have sensitive data logging turned on in production.
> Logging the actual data used is a security risk and could break some
> user privacy rules like GDPR.

In fact, I have methods in my EfCore.TestSupport library that handle building the options and turning on sensitive logging, plus a load of other things. Here is an example of one helper that creates in-memory database options, with logging.

var logs = new List<LogOutput>();
var options = SqliteInMemory
    .CreateOptionsWithLogging<BookContext>(log => logs.Add(log));
using (var context = new BookContext(options))
{
    //… now start using context

The EfCore.TestSupport library has another version of this that works for SQL Server. It creates a unique database name per class, or per method, because xUnit (the favourite unit test framework for .NET Core) runs each test class in parallel.
> NOTE: EfCore.TestSupport uses a logging provider that calls an
> action method for every log. This makes it easy to write logs to the
> console, or capture them into a list.
DECODING THE PARAMETERS
Having captured the EF Core logs, I now need to decode the first line, which has the parameters. There are a few permutations, but it’s clear that Regex is the way to go. The problem is I’m not an expert on Regex, but LinqPad came to my rescue! LinqPad 5.36 has a very nice Regex tool – the best I have found so far. Here is a screenshot of its regex feature, which is called up via Ctrl+Shift+F1.
> WARNING: It’s a great tool, but I thought if I saved the code it
> would keep the pattern I had created, but it doesn’t. I spent
> hours getting the regex right and then lost it when I entered
> something else. Now I know it’s all OK, but be warned.
All my trials came up with the following Regex code:

new Regex(@"(@p\d+|@__\w*?_\d+)='(.*?)'(\s\(\w*?\s=\s\w*\))*(?:,\s|\]).*?");

If you don’t know regex then it won’t mean anything to you, but it does the job of finding the a) param name, b) param value, and c) extra information on the parameter (like its size). You can see the whole decode code here.
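To show what that pattern does, here is a small stand-alone sketch that matches a made-up parameter line and then substitutes the value back into the SQL. The sample log line and the substitution code are my own illustration, not the library’s actual code:

```csharp
using System;
using System.Text.RegularExpressions;

class Program
{
    static void Main()
    {
        // The regex from above: captures a) the param name,
        // b) the param value, and c) extra info (like its size)
        var paramRegex = new Regex(
            @"(@p\d+|@__\w*?_\d+)='(.*?)'(\s\(\w*?\s=\s\w*\))*(?:,\s|\]).*?");

        // A made-up parameters part of a CommandExecuted log line
        var paramPart = "[Parameters=[@__id_0='1' (DbType = Int32)], CommandType='Text']";
        var sql = "WHERE ([x].[BookId] = @__id_0)";

        foreach (Match match in paramRegex.Matches(paramPart))
        {
            var name = match.Groups[1].Value;   // e.g. @__id_0
            var value = match.Groups[2].Value;  // e.g. 1
            // Wrap the value in single quotes - the "treat most things
            // as strings" decision described later in this article
            sql = sql.Replace(name, $"'{value.Replace("'", "''")}'");
        }

        Console.WriteLine(sql); // WHERE ([x].[BookId] = '1')
    }
}
```

The real decode code handles many more permutations (multiple parameters, NULLs, extra size information), but this is the core match-then-substitute idea.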
LIMITATIONS OF THE DECODING
It turns out that EF Core’s logged data doesn’t quite give you all you need to perfectly decode the log back into correct SQL. Here are the limitations I found:
* You can’t distinguish the difference between an empty string and a null string; both are represented the same way in the log. I decided to make both return NULL.
* You can’t work out if it’s a byte[] or not, so a byte[] is treated as a SQL string. This will FAIL in SQL Server.
* You can’t tell if something is a Guid, DateTime etc., which in SQL Server need quotes around them. In the end I wrapped most things in quotes, including numbers. SQL Server accepts numbers as strings (but other databases won’t).
EXAMPLE OF A DECODED SQL
If we go back to the book lookup at the start of this article, then the decoded result is shown below.

SELECT TOP(2) [x].[BookId], …
FROM [Books] AS [x]
WHERE ([x].[SoftDeleted] = 0) AND ([x].[BookId] = '1')

As you can see on the last line, the integer is represented as a string. This isn’t the normal way to do this, but it works in SQL Server. I took the decision to wrap things that I didn’t think were strings, because this is what is needed to make other types, such as GUIDs, DateTime etc., work. My really complex test contained lots of different .NET types, and here is the output.
SET NOCOUNT ON;
INSERT INTO … (…)
VALUES ('ascii only', 1, NULL, '0x010203', '2000-01-02T00:00:00', NULL,
    '2004-05-06T00:00:00.0000000+01:00', '3456.789', '5678.9012',
    'ba65d636-65d4-4c07-8ddc-50c615cef539', NULL, '1234', NULL,
    'string with '' in it', NULL, NULL, '04:05:06');
SELECT …
FROM …
WHERE @@ROWCOUNT = 1 AND … = scope_identity();

In this complex version the parts that fail are:
* The MyByteArray value has quotes around it and FAILS – taking off the string delimiters fixes that.
* The MyStringEmptyString is set to NULL instead of an empty string.
Not perfect, but quite usable.
HOW CAN YOU ACCESS THIS CODE?
If you just want to use this feature, it’s built into the latest EfCore.TestSupport NuGet package (1.7.0 to be precise). It’s built into the LogOutput class
which is used by the loggers in this library. There are methods that create options for SQLite (in-memory) and SQL Server databases that allow logging. There are plenty of examples of these in the library – have a look at the unit tests in the TestEfLoggingDecodeBookContext class.
If you want to play with the code yourself, then take a copy of the EfCoreLogDecoder class, which contains the decode parts.
CONCLUSION
Well, it was a bit of fun – maybe not something I would do on a job, but still a useful tool. I was a bit disappointed I couldn’t decode the log completely, but what it does is still useful to me. Maybe you will find it useful too. Now I need to get back to my real work for my clients. See you on the other side!
Happy coding.
Categories: .NET Core, Entity Framework
BUILDING A ROBUST CQRS DATABASE WITH EF CORE AND COSMOS DB
Last Updated: February 25, 2019 | Created: February 23, 2019
Back in 2017 I wrote about a system I built using EF Core 2.0 which used the Command and Query Responsibility Segregation (CQRS) pattern, combining a SQL Server write database with a RavenDB NoSQL read database. The title of the article is “EF Core – Combining SQL and NoSQL databases for better performance”, and it showed this combination gave excellent read-side performance.
Ever since then I have been eagerly waiting for Entity Framework Core’s (EF Core) support of the Cosmos DB NoSQL database, which is in preview in EF Core 2.2. RavenDB worked well, but having a NoSQL database that EF Core can use natively makes it a whole lot easier to implement, as you will see in this article.
I’m writing this using the early EF Core 2.2.0-preview3 release of the Cosmos database provider. This preview works but is slow, so I won’t be focusing on performance (I’ll write a new article on that when the proper Cosmos DB provider is out in EF Core 3). What I will focus on is providing a robust implementation which ensures that the two databases are kept in step. The original CQRS design had a problem if the NoSQL RavenDB update failed: at that point the two databases were out of sync. That was always nagging me, and Roque L Lucero P called me out on this issue on the original article (see this set of comments on that topic). I decided to wait until EF Core support of Cosmos DB was out (that has taken longer than originally thought) and fix this problem when I did the Cosmos DB rewrite, which I have now done.
> NOTE: This article comes with a repo containing all the code and a
> fully functional example – see
> https://github.com/JonPSmith/EfCoreSqlAndCosmos. To run the code
> you need SQL Server (localdb is fine) and the Cosmos DB emulator.
> It will auto-seed the database on startup.
TL;DR – SUMMARY
* For an introduction to the CQRS pattern read this excellent article on Microsoft’s site.
* I implement a two-database CQRS pattern, with the write database being a relational (SQL Server) database and the read database being a NoSQL database (Cosmos DB).
* This type of CQRS scheme should improve the performance of read-heavy applications, but it is not useful for write-heavy applications (because the writes take longer).
* My implementation uses EF Core 2.2 with the preview Cosmos database provider.
* I implement the update of the NoSQL database inside EF Core’s SaveChanges methods. This means a developer cannot forget to update the read-side database, because it’s done for them by the code inside SaveChanges.
* I use a SQL transaction to make sure that the SQL and NoSQL database updates are both done together. This means the two databases will always be in step.
* This design of CQRS pattern is suitable for adding to a system later in its development to fix specific performance issues.
* There is an example application on GitHub available to go with this article.
SETTING THE SCENE – USING CACHING TO IMPROVE READ PERFORMANCE
_You can skip this section if you already understand caching and the CQRS pattern._
In many applications the database accesses can be a bottleneck on performance, i.e. the speed and scalability of the application. When using EF Core there are lots of things you can do to improve the performance of database accesses, and it’s also really easy to do things that give you terrible performance.

There are only two hard things in Computer Science: cache invalidation and naming things. – Phil Karlton

But what do you do when even the best SQL database queries are deemed “too slow”? One typical approach is to add caching to your application, which holds a copy of some data in a form that can be accessed quicker than the original source. This is very useful, but making sure your cached version is always up to date is very hard (the Phil Karlton quote comes from an article on CQRS written by Mateusz Stasch).
Caching can occur in lots of places, but in this article I cover caching at the database level. At the database level the caching is normally done by building “ready to display” versions of data. This works well where the read requires data from multiple tables and/or time-consuming calculations. In my book “Entity Framework Core in Action” I use a book selling application (think super-simple Amazon) as an example, because it contains some complex calculations (forming the list of authors, calculating the average review stars etc.). You can see a live version of the book app at http://efcoreinaction.com/.
You can do this yourself by building a pre-calculated version of the book list display (I did it in section 13.4 of my book), but it’s hard work and requires lots of concurrency handling to ensure that the pre-calculated version is always updated properly. The CQRS pattern makes this easier to do because it splits the write and read operations. That makes it simpler to catch the writes and deliver the reads. See the figure taken from Microsoft’s CQRS article (see this link for the authors and the Creative Commons Attribution Licence for this figure).
A further step I have taken is to have two databases – one for writes and one for reads. In my case the read data store is an Azure Cosmos DB – according to Microsoft a “highly responsive, low latency, high availability, scalable” NoSQL database. The figure below gives you a high-level view of what I am going to describe.
The rest of the article describes how to build a two-database CQRS database pattern using EF Core with its new support for the Cosmos DB NoSQL database. The design also includes one way to handle the “cache invalidation” problem inherent in having the same data in two forms, hence the “robust” word in the title.
DESCRIBING THE SOFTWARE STRUCTURE
With any performance tuning you need to be clear what you are trying to achieve. In this case my tests show it gets slow as I add lots of books, but buying a book (i.e. creating a customer order) is quick enough. I therefore decided to minimise the amount of development work and only apply the CQRS approach to the book list, leaving the book buying process as it was. This gives me a software structure where I have one Data Layer, but it has two parts: SQL Server (orange) and Cosmos DB (purple).
I am really pleased to see that I can add a CQRS implementation only where I really need it. This has two big benefits:
* I only need to add the complexity of CQRS where it’s needed.
* I can add a CQRS system to an existing application to performance tune specific areas.
Most applications I see have lots of database accesses, and many of them are admin-type accesses which are needed, but their performance isn’t that important. This means I only really want to add CQRS where it’s needed, because adding CQRS is more work and more testing. I want to be smart as to where I spend my time writing code, and many database accesses don’t need the performance improvements (and complexities!) that CQRS provides.
But the best part is that by implementing my CQRS system inside EF Core’s SaveChanges methods I know that any existing database change HAS to go through my code. That means if I’m adding my CQRS system to an existing project I know I can catch all the updates, so my NoSQL (cache) values will be up to date.
MAKING MY UPDATE ROBUST – USE A TRANSACTION
As well as using the new Cosmos DB database provider in EF Core, I also want to fix the problem I had in my first version of this CQRS, two-database pattern. In the previous design the two databases could get out of step if the write to the NoSQL database failed. If that happens then you have a real problem: the book information you are showing to your users is incorrect. That could cause problems with your customers, especially if the price on the book list is lower than the price at checkout time!
There are lots of ways to handle this problem, but I used a feature available to me because I am using a SQL database as my primary database – a SQL transaction (see the previous diagram). This is fairly easy to do, but it does have some down (and up) sides. The main one is that the write of data is slower, because SaveChanges only returns when both writes have finished.
But there is an upside to this: it solves what is known as the “eventually consistent” problem, where you do an update but when the app returns the data on your screen hasn’t updated yet.
Jimmy Bogard has an excellent series called “Life Beyond Distributed Transactions: An Apostate’s Implementation”; in the 8th article in the series he talks about using a transaction in a SQL database to ensure the second update is done before exiting. Jimmy is very clear that too many people ignore these errors – as he says in his tweet, “hope is not a strategy”!
Jimmy’s approach is easy to understand, but if I used his approach I would have to find and replace every update path with some special code. In EF Core I can fix that by moving the code inside the SaveChanges methods, which means all the checks and updates are done whenever I create, update or delete anything that would change the book list display. That way I, or any of my colleagues, can’t forget to do the NoSQL update.
LET’S GET INTO THE CODE!
The whole process is contained in the SaveChanges (sync and async) methods. Below is the sync SaveChanges code.

public override int SaveChanges(bool acceptAllChangesOnSuccess)
{
    if (_bookUpdater == null)
        //if no bookUpdater then run as normal
        return base.SaveChanges(acceptAllChangesOnSuccess);

    try
    {
        var thereAreChanges = _bookUpdater
            .FindBookChangesToProjectToNoSql(this);
        //This stops ChangeTracker being called twice
        ChangeTracker.AutoDetectChangesEnabled = false;
        if (!thereAreChanges)
            return base.SaveChanges(acceptAllChangesOnSuccess);
        return _bookUpdater
            .CallBaseSaveChangesAndNoSqlWriteInTransaction(this,
                () => base.SaveChanges(acceptAllChangesOnSuccess));
    }
    finally
    {
        ChangeTracker.AutoDetectChangesEnabled = true;
    }
}
There are lots of lines, and many of them are there to make the code run efficiently, but there are two methods in the code that manage the book list update:
* FindBookChangesToProjectToNoSql. This uses EF Core’s ChangeTracker to find changes to entity classes that will affect the book list display.
* CallBaseSaveChangesAndNoSqlWriteInTransaction. This is only called if NoSQL writes are needed, and it handles the secure update of both the SQL database and the Cosmos database.
Now we will look at the two parts – finding the changes and then saving the changes securely.
FINDING THE BOOK CHANGES
Finding the book changes is complex, but this article is really about making a robust CQRS system with Cosmos DB, so I’m only going to skip over this code and just give you a diagram of how the entity’s State is used to decide what changes should be applied to the NoSQL database.
The diagram starts with the book list that the NoSQL database is going to provide, with the list of entities that affect that book list. The table at the bottom shows how I use the entity’s State to decide what changes need to be applied to the NoSQL database. The basic idea is that the Book’s State takes precedence, with changes in the associated relationships only causing an update to the book list. There are some subtle items, especially around soft delete, which you can see in the BookChangeInfo class.
> NOTE: The actual code to work out the updates needed is quite
> complex, but you can see it in the accompanying example repo by
> starting in the SQL DbContext’s SaveChanges and following the code.
> I cover how I decoded the entity State in section 14.2.4 of my book
> “Entity Framework Core in Action”.
The end of all this is that there is a series of book list changes that must be applied to the NoSQL database to make it match the data in the SQL database. The trick is to make sure that nothing can make the two databases get out of step with each other, which I cover next.
UPDATING THE DATABASES IN A SECURE MANNER
To make sure my SQL and NoSQL databases are always in step, I apply both database updates inside a SQL transaction. That means if either of the updates, SQL or NoSQL, fails then neither is applied. The code below shows how I do that.

private int RunSqlTransactionWithNoSqlWrite(
    DbContext sqlContext, Func<int> callBaseSaveChanges)
{
    if (sqlContext.Database.CurrentTransaction != null)
        throw new InvalidOperationException(
            "You can't use the NoSqlBookUpdater if you are using transactions.");

    var applier = new ApplyChangeToNoSql(sqlContext, _noSqlContext);
    using (var transaction = sqlContext.Database.BeginTransaction())
    {
        var result = callBaseSaveChanges(); //Save the SQL changes
        applier.UpdateNoSql(_bookChanges);  //apply changes to NoSql database
        _noSqlContext.SaveChanges();        //and save to NoSql database
        transaction.Commit();
        return result;
    }
}
Using a SQL transaction is a nice way to implement this, but you must apply the NoSQL database update at the end of the transaction. That’s because the Cosmos DB database provider does not support transactions (most NoSQL databases don’t support transactions), which means the NoSQL write cannot be rolled back (i.e. undone). That also means you can’t use this approach inside another transaction, as you could do something after the NoSQL update that errored (hence the check on CurrentTransaction at the start of the method).
Here is the sync UpdateNoSql method (there is a similar async version). I use AutoMapper’s ProjectTo method to create the book list version needed for the display.

public bool UpdateNoSql(IImmutableList<BookChangeInfo> booksToUpdate)
{
    if (_noSqlContext == null || !booksToUpdate.Any()) return false;

    foreach (var bookToUpdate in booksToUpdate)
    {
        switch (bookToUpdate.State)
        {
            case EntityState.Deleted:
            {
                var noSqlBook = _noSqlContext
                    .Find<BookListNoSql>(bookToUpdate.BookId);
                _noSqlContext.Remove(noSqlBook);
                break;
            }
            case EntityState.Modified:
            {
                //Note: You need to read the actual Cosmos entity because of the extra columns like id, _rid, etc.
                //Version 3 of EF Core might make Attach work.
                //See https://github.com/aspnet/EntityFrameworkCore/issues/13633
                var noSqlBook = _noSqlContext
                    .Find<BookListNoSql>(bookToUpdate.BookId);
                var update = _sqlContext.Set<Book>()
                    .ProjectTo<BookListNoSql>(_mapperConfig)
                    .Single(x => x.BookId == bookToUpdate.BookId);
                _noSqlContext.Entry(noSqlBook).CurrentValues.SetValues(update);
                break;
            }
            case EntityState.Added:
                var newBook = _sqlContext.Set<Book>()
                    .ProjectTo<BookListNoSql>(_mapperConfig)
                    .Single(x => x.BookId == bookToUpdate.BookId);
                _noSqlContext.Add(newBook);
                break;
            default:
                throw new ArgumentOutOfRangeException();
        }
    }
    return true;
}
ALTERNATIVE WAYS OF MAKE THE UPDATE ROBUST I think using a transaction is a simple way to ensure both databases are in step, but as Rafael Santos say in his comment it does make the SaveChanges take a lot longer, because it only returns when both writes are finished. In my first approach using RavenDB the write went from 13ms to 35ms.
That means this approach is OK for, say, an e-commerce product list (i.e. products that customers can buy) where there are lots of reads but only a few writes, but it wouldn’t work so well for, say, stock control (i.e. what you have in the warehouse), where there are lots of writes happening.
I have been thinking about this and here are some ideas for you to think about. Note: all of these approaches suffer from the “eventually consistent” problem I mentioned before, i.e. the system will return to the user before the data they were looking at has been updated.

1. SEND A MESSAGE INSIDE A TRANSACTION
In this case you would fire-and-forget a message to another system via a reliable queue, like RabbitMQ or Azure Service Bus. Then it is the job of the system that gets that message to make sure the NoSQL update is repeated until it works. The SaveChanges should return more quickly because queuing the message will be quick. This is what Jimmy Bogard does in the relational article in his series called Life Beyond Distributed Transactions: An Apostate’s Implementation. Do have a read.

2. RUN A BACKGROUND TASK TO FIX ANY FAILURES
If you added a LastUpdated DateTime to all the SQL entities, and a similar LastUpdated to the NoSQL cached version, then you have a way to find mismatches. This means you could look for changes since the task last ran and check that the SQL and NoSQL versions have matching LastUpdated values (Cosmos DB has a “_ts” Unix-styled timestamp that may be useful). Either you run the method every x minutes (simple, but not that good) or you catch a NoSQL error and run the method looking for updates equal to the LastUpdated time of the SQL update.

3. (ADVANCED) USE THE CHANGETRACKER.STATECHANGED EVENT
There is a really nice, but not much talked about, feature called the ChangeTracker.StateChanged event, which fires after SaveChanges has completed. This gives you a solution that only kicks in for a specific update error. Basically, you could kick off a timer for every NoSQL write, which is cancelled by the NoSQL ChangeTracker.StateChanged event that occurs on a successful write (the state changes to Unchanged if successful). If the timer times out, then you know the NoSQL update failed and you can take remedial action to fix it. This is advanced stuff needing a ConcurrentDictionary to track each write. I have thought about it, but not implemented it yet. However, if a client wanted me to add a CQRS pattern with quick writes to their application, then this is most likely what I would build.

LIMITATIONS IN COSMOS SUPPORT IN 2.2 PREVIEW
This application was built with EF Core 2.2 and the Microsoft.EntityFrameworkCore.Cosmos package 2.2.0-preview3-35497. This is a very early version of the Cosmos DB support in EF Core, with some limitations for this application. They are:
* This Cosmos DB preview is very slow! (like hundreds of milliseconds). Version 3 will use a new Cosmos DB SDK, which will be faster.
* I would have liked the Cosmos id value to be the same as the BookId GUID string.
* I couldn’t use Attach for the update of an existing NoSQL database, which would have been quicker.
You can track what is happening to EF Core support for Cosmos here. I will most likely update the application when Version 3 is out and write an article on its performance.

CONCLUSION
I am very happy to see EF Core supporting NoSQL databases alongside relational databases. It gives me more flexibility to use the right database for the business needs. Also, EF Core has the depth and flexibility for me to implement quite complex state management of my database writes, which I needed to implement my CQRS two-database design.
Personally, I like the CQRS two-database design because it too allows me flexibility – I can add it only to the queries that need performance tuning, and it’s also fairly easy to add retrospectively to an application that uses EF Core and relational (SQL) databases. Most performance tuning is done late, and my design fits in with that. The next stage is to see what performance gains I can get with EF Core version 3. In my original version of CQRS with RavenDB I got very good performance indeed. I’ll let you know how that goes when EF Core 3 is out!

Happy coding.
Categories: .NET Core, Entity Framework, NoSQL | 5 Comments
HANDLING ENTITY FRAMEWORK CORE DATABASE MIGRATIONS IN PRODUCTION – PART 2
Last Updated: August 9, 2019 | Created: January 30, 2019 This is the second in the series on migrating a database using Entity Framework Core (EF Core). This article looks at APPLYING A MIGRATION TO A DATABASE and follows on from part 1 which covered how to CREATE A MIGRATION SCRIPT.
If you haven’t read part 1, some parts of this article won’t make sense, so here is a very quick review of part 1.
* There are two types of migrations that can be applied to a database:
  * Adds new tables, columns etc., known as a _non-breaking change_ (easy).
  * Changes columns/tables and needs to copy data, known as a _breaking change_ (hard).
* There are several ways to create a migration script:
  * Use the EF Core migration feature.
  * Use EF Core to create a migration and then hand-modify the migration.
  * Use a third-party migration builder to write the migration in C#.
  * Use a SQL database comparison tool to compare databases and output a SQL changes script.
  * Write your own SQL migration scripts by copying EF Core’s SQL.

So, now that you know how to create migration scripts, I’m going to look at the different ways you can apply a migration to a production database, with all the pros, cons and limitations.

TL;DR – SUMMARY OF THE CONTENT
NOTE: Click the links to go directly to the section covering that point.
* The type of application you have affects the migration approach you can use.
* You must think about what errors could occur and have a plan.
* There are four ways to apply a migration to a database:
  * Calling context.Database.Migrate() on startup – easy, but has some significant problems that limit its usefulness.
  * Calling context.Database.Migrate() via a console app – easy, and works well, especially in a deployment pipeline.
  * Outputting EF Core’s migration as a SQL script and executing that script on the target database – hard, but gives you great control.
  * Using a database migration application tool to apply your own SQL scripts – hard, but gives you great control.
* There are three different levels of applying a migration:
  * Stopping the application while you migrate the database is the safest option, but not always possible.
  * Some, but not all, non-breaking changes can be applied to a database while the application is running.
  * For continuous service applications (running 24/7), applying breaking changes needs five steps.
SETTING THE SCENE – WHAT SORT OF APPLICATION HAVE YOU GOT?
In part 1 we were focused on creating migrations that were “valid” and on whether the migration is a non-breaking change or breaking change (see the quick definition at the start of this article, or this link for the section in part 1). Now we are looking at applying a migration to a database, but the options we have depend on the application (or applications) that are accessing the database. Here are the questions you need to think about.
* Is there only one application accessing that database, or is your application a web app which is scaled-out, i.e. there are multiple instances of your application running at the same time? If your application is scaled-out, then this removes one of the options.
* Can you stop your application while you apply a migration to the database, or is your application providing a continuous (24/7) service? Updating continuous service applications brings some challenges when it comes to applying a breaking change.

When it comes to migrating a production database, being a bit paranoid is OK.
As I said at the end of part 1 – the scary part comes when you apply a migration to a production database. Changing a database which contains business-critical data needs (demands!) careful planning and testing. You need to think about what you are going to do if (when!) a migration fails with an error.

When considering the different ways to apply a migration you should have in the back of your mind “what happens if there is an error?”. This might push you to a more complex migration approach because it’s easier to test or revert. I can’t give you rules or suggestions as each system is different, but being a bit paranoid about failures isn’t a bad thing. It should make you build a system for migrating your application and its database that is more robust.

PART 2: HOW TO APPLY A MIGRATION TO A DATABASE
The list below gives the different ways you can apply a migration to a database. I list three options for the EF Core case: the first being the simplest, but it has limitations which the other two options don’t have. The SQL migration has no real limitations, but it does need a database migration application tool to apply the SQL scripts only once and in the right order. Here is the list of ways you can apply a migration.
* EF Core migration:
  * Calling context.Database.Migrate() on startup
  * Calling context.Database.Migrate() via a console app or admin command
  * Outputting the migration as a SQL script and executing that script on the target database
* SQL migrations:
  * Use a database migration application tool.

In the end, how you apply your migration depends on the type of migration (breaking or non-breaking) and the type of application you are updating (single app, multiple apps running in parallel, or an app that mustn’t stop). Here is a diagram to try and convey all these permutations.
The outer dark blue shows that SQL migrations can be applied in all cases, then the lighter, inner boxes show where the different types of EF Core migrations can be used. Here are some clarifying notes about the diagram:
* The diagram shows standard EF migrations and hand-modified EF migrations, but when I am talking about _applying the migration_ there is no distinction between the two – we are simply applying an EF Core migration.
* The “five-stage app update” red box in the diagram represents the complex set of stages you need to apply a breaking change to an application that cannot be stopped. I cover that at the end of the article.

Now I will go through each of the ways of applying a migration in detail.
1A. CALLING CONTEXT.DATABASE.MIGRATE() ON STARTUP
This is by far the easiest way to apply a migration, but it has a big limitation – you should not run multiple instances of the Migrate method at the same time. That can happen if you scale out a web application. To quote Andrew Lock, “_It’s not __guaranteed__ that this will cause you problems, but unless you’re extremely careful about ensuring idempotent updates and error-handling, you’re likely to get into a pickle_” – see this section of his post “Running async tasks on app startup in ASP.NET Core”.
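As a sketch, the startup call typically looks like the code below. This is my illustration, not code from the article – AppDbContext is a placeholder for your application’s DbContext, and it follows the ASP.NET Core 2.x Program.cs layout.

```csharp
// Sketch only: applying migrations on startup in an ASP.NET Core 2.x app.
// AppDbContext is a placeholder name for your application's DbContext.
public static void Main(string[] args)
{
    var host = CreateWebHostBuilder(args).Build();

    // Migrate before the host starts handling requests
    using (var scope = host.Services.CreateScope())
    {
        var context = scope.ServiceProvider
            .GetRequiredService<AppDbContext>();
        context.Database.Migrate(); // applies any pending migrations
    }

    host.Run();
}
```

Remember the warning above: if two scaled-out instances run this at the same time, you can get conflicting migrations.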
GOOD PARTS
It is relatively easy to implement (see tips). It ensures the database is up to date before your application runs.
BAD PARTS
You must NOT run two or more Migrate methods in parallel. If the migration has an error, then your application won’t be available. It’s hard to diagnose startup errors.
LIMITATIONS
Does not work with continuous service systems.
TIPS
I quite like the option in Andrew Lock’s article for running a migration on startup. I use a similar approach in some of my demo systems that use in-memory databases that need initializing (see this example).
MY VERDICT
If you are running a single web app or similar, and you can update the system when no one is using it, then this might work for you. I don’t use this, as many of the systems I work on use scale-out.

1B. CALLING CONTEXT.DATABASE.MIGRATE() VIA A CONSOLE APP OR ADMIN COMMAND
If you can’t run multiple Migrate methods in parallel, then one way to ensure this is to call the Migrate method inside a standalone application designed to just execute the Migrate method. You might add a console application project to your main web app solution which has access to the DbContext and can call Migrate. You can either run it yourself or let your deployment system run it (note to EF6.x users – this is the equivalent of running Migrate.exe, but with the application dll compiled in).
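A minimal sketch of such a console app might look like this. It is my own illustration: AppDbContext and the use of SQL Server are assumptions, and the connection string comes in as an argument so a deployment pipeline can supply it.

```csharp
// Sketch of a stand-alone migration console app.
public static class Program
{
    public static int Main(string[] args)
    {
        if (args.Length < 1)
        {
            Console.Error.WriteLine("Usage: MigrateDb <connection-string>");
            return 1;
        }

        var options = new DbContextOptionsBuilder<AppDbContext>()
            .UseSqlServer(args[0])
            .Options;

        using (var context = new AppDbContext(options))
        {
            context.Database.Migrate(); // runs only the pending migrations
        }
        return 0; // zero exit code lets the deployment pipeline continue
    }
}
```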
GOOD PARTS
It works in all situations. Works well with deployment systems.
BAD PARTS
A bit more work.
LIMITATIONS
– none –, but watch out for the continuous, five-stage app update.
TIPS
If your console application takes in a connection string to define which database to apply the migration to, then it will be easier to use in your deployment pipeline.
MY VERDICT
A good option if you have a deployment pipeline, as you can execute the console application as part of the deployment. If you are manually applying the migration, then there is the Update-Database command.

1C. TURNING AN EF CORE MIGRATION INTO A SCRIPT AND APPLYING IT TO THE DATABASE
By using the Script-Migration command EF Core will convert a specific migration, or by default all your migrations, into a SQL script. You can then apply this using something that can execute SQL on the specific database you want updated. You can manually execute the SQL in SQL Server Management Studio, but typically you have something in your release pipeline to do that at the right time.
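If you don’t have a dedicated tool to run the script, applying it can be sketched as below. This is my illustration, not from the article: the file name and connection string are placeholders, the script is assumed to have been generated with the idempotent option, and the commented-out part needs a SQL client library such as System.Data.SqlClient.

```csharp
// Sketch: apply a Script-Migration output file to a target database.
// "GO" is a batch separator for SSMS/sqlcmd, not T-SQL, so the script
// must be split into batches before execution.
using System;
using System.Linq;
using System.Text.RegularExpressions;

public static class ScriptRunner
{
    public static string[] SplitSqlBatches(string script)
    {
        return Regex.Split(script, @"^\s*GO\s*$",
                RegexOptions.Multiline | RegexOptions.IgnoreCase)
            .Where(b => !string.IsNullOrWhiteSpace(b))
            .Select(b => b.Trim())
            .ToArray();
    }

    // The database part (placeholder connectionString) would then be:
    // using (var connection = new SqlConnection(connectionString))
    // {
    //     connection.Open();
    //     foreach (var batch in SplitSqlBatches(script))
    //         new SqlCommand(batch, connection).ExecuteNonQuery();
    // }
}
```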
GOOD PARTS
It works in all situations. Works well with deployment systems which can use SQL scripts. You can look at the SQL before it’s run to see if it looks OK.
BAD PARTS
More work than the console app (1b). You need some application to apply the script to the correct database.
LIMITATIONS
– none –, but watch out for the continuous, five-stage app update.
TIPS
The SQL contains code to update the migration history, but you must include the idempotent option in the Script-Migration command to get the checks that stop a migration from being applied twice.
MY VERDICT
If you want to use EF Core’s Migrate method, then I would suggest using 1b, the console app. It’s as safe as using the scripts and does the same job. But if your pipeline already works with SQL change scripts, then this is a good fit for you.

2A. USING A MIGRATION TOOL TO APPLY A SQL SCRIPT
If you create a series of SQL migration scripts, then you need something to a) apply them in the right order and b) apply them only once. EF Core’s migrations contain code that implements the “right order” and “only once” rules, but when we write our own migration scripts we need a tool that provides those features. I, and many others, use an open-source library called DbUp that provides these features (and more) and also supports a range of database types. I order my migration scripts alphabetically, e.g. “Script0001 – initial migration”, “Script0002 – add seed data”, for DbUp to apply. Just like EF Core migrations, DbUp uses a table to list what migrations have been applied to the database and will only apply a migration if it isn’t in that table. Other migration tools are available, for instance Octopus Deploy and various RedGate tools (but I haven’t used them, so check they have the correct features).
GOOD PARTS
It works in all situations. Works well with deployment systems.
BAD PARTS
You have to manage the scripts.
LIMITATIONS
– none –, but watch out for the continuous, five-stage app update.
TIPS (for DbUp)
I make a console application that takes in the connection string and then runs DbUp, so I can use it in my deployment pipeline. For testing, I make the method that runs DbUp available to my unit test assembly in an “only run in debug mode” unit test that migrates my local database correctly, using my CompareEfSql tool (see the section about testing migrations in part 1 of this series).
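The console application mentioned in the tips can be sketched as below. This is a sketch under my assumptions: the scripts are embedded resources in the assembly, and the target is SQL Server.

```csharp
// Sketch of a DbUp console runner. Scripts embedded in this assembly are
// applied in name order, and DbUp's journal table stops them running twice.
public static class Program
{
    public static int Main(string[] args)
    {
        var connectionString = args[0]; // supplied by the deployment pipeline

        var upgrader = DeployChanges.To
            .SqlDatabase(connectionString)
            .WithScriptsEmbeddedInAssembly(Assembly.GetExecutingAssembly())
            .LogToConsole()
            .Build();

        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            Console.Error.WriteLine(result.Error);
            return -1; // non-zero exit code fails the deployment
        }
        return 0;
    }
}
```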
MY VERDICT
I use this approach on projects that use EF Core.

THE APPLICATION AND APPLYING MIGRATIONS
When you apply a migration to the database you can stop the application, or in some circumstances you can apply the migration while it is running. In this section I look at the different options available to you.
1. STOPPING THE APPLICATION WHILE YOU MIGRATE THE DATABASE
This is the safest option and works with breaking and non-breaking changes, but your users and your business might not be so happy. I call this the “site down for maintenance” approach. The point of the “site down” approach is that you don’t want to stop an application while users are inputting data or are partway through an order. That’s how you or your company gets a bad reputation. I had this problem myself back in 2015, and I created a way to warn people that the site was going to close and then stopped all but the admin person from accessing the application. I chose this approach because, for the web application I was working on, it was a less costly approach than supporting breaking changes while keeping the web app running (I cover applying breaking changes to a continuous service application later). You may have come across “this site is down for maintenance” on services you use, normally at weekends and overnight.
> NOTE: I wrote an article called How to take an ASP.NET MVC web site “Down for maintenance” which you might like to look at – the code was for ASP.NET MVC5, so it will need some work to get it to work with .NET Core, but the idea is still valid.

APPLYING NON-BREAKING MIGRATIONS WHILE THE APPLICATION IS RUNNING
With non-breaking changes you can, in theory, apply them to the database while the old application is running, but there are some issues that can catch you out. For instance, if you added a new, non-null column with no SQL default, and the old software, which doesn’t know about that new column, tries to INSERT a new row, you will get a SQL error because the old software hasn’t provided a value for a non-null column.
But if you know your non-breaking migration doesn’t have a problem, then applying the migration while the old application is running provides continuous service to your users. There are various ways to do this, depending on which migration application approach you have chosen; ones that come to mind are Azure’s staging slots, which have been around for ages, and the newer Azure Pipelines.
APPLYING BREAKING CHANGES TO A CONTINUOUSLY RUNNING APPLICATION: THE FIVE-STAGE APP UPDATE
The hardest job is applying a breaking change to a continuously running application. In the diagram showing the different approaches you will see a red box called “five-stage app update” in the top-right. The name comes from the fact that you need to migrate in stages, typically five, as shown in the diagram below.

> NOTE: Andrew Lock commented that the “add a non-nullable column” problem I described in the last section can be handled in three stages: a) add the new column but as nullable, b) deploy new software that knows about that column, and c) alter the column to be non-nullable.
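Andrew Lock’s three stages could look something like the following EF Core migration fragments. This is my illustration – the table and column names are invented, and the backfill value is an assumption.

```csharp
// Stage 1: add the column as nullable, so the old software's INSERTs
// (which don't know about the column) still succeed
migrationBuilder.AddColumn<string>(
    name: "NewColumn",
    table: "MyEntities",
    nullable: true);

// Stage 2: deploy new software that always fills in NewColumn
// (no database change in this stage)

// Stage 3: backfill any rows the old software wrote, then make the
// column non-nullable
migrationBuilder.Sql(
    "UPDATE MyEntities SET NewColumn = '' WHERE NewColumn IS NULL");
migrationBuilder.AlterColumn<string>(
    name: "NewColumn",
    table: "MyEntities",
    nullable: false);
```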
Here is a diagram taken from section 11.5.3 of my book “Entity Framework Core in Action” which shows the five stages needed to apply a breaking change that splits an existing CustomerAndAddress table into two tables, Customers and Addresses. As you can see, an update like this is complex to create and complex to apply, but that’s the cost of running a continuous system. There aren’t any real alternatives to the five stages, other than never applying a breaking change to a continuously running system (I have heard of one person who said that was their approach).

> NOTE: I cover the continuous, five-stage app update in section 11.5.3 of my book “Entity Framework Core in Action” and you can also find coverage of this in chapter 5 of the book “Building Evolutionary Architectures” by Neal Ford et al.

CONCLUSION
If the data in your database and the availability of your service are important to your organisation, then you must take database migrations seriously. In part 1 I covered the different ways to create a migration script, and this article covers how you might apply those migrations to a production database. The aim of this series is to provide you with the options open to you, with their pros, cons and limitations, so that you can take an informed decision about how to handle migrations.
As I said in the first article, my first run-ins with EF migrations were with EF6. I know EF6 very well, and having written the book “Entity Framework Core in Action” I know EF Core even better. The change from EF6 to EF Core around migrations typifies the change in the whole approach in EF Core. EF6 had lots of “magic” going on to make it easier to use – automatic migration on startup was one of them. The problem was, when EF6’s “magic” didn’t quite work, it was hard to sort out. EF Core’s approach to migrations is that it’s up to you where and how you use it – no automatic “magic”. And lots of other small changes to migrations in EF Core come from listening to users of EF4 to 6.

So, migrating a production database is scary, but it’s always been scary. I have given you some insights into the options, but that’s only really the minimum for production database changes. Backups, policies, pre-prod testing and deployment pipelines need to be added as required to make a reliable system.

Happy coding.
Categories: .NET Core, Entity Framework | 3 Comments