Posts Tagged

Nuget

Distributing AI Skills and MCP via NuGet

With the rise of AI coding assistants like GitHub Copilot, OpenCode, Claude, and others, a new category of files has emerged: Skills, AGENTS.MD, Custom Instructions, MCP configuration, and other AI-related configuration files. These files help AI agents understand our codebase, our patterns, and our preferences, making them more effective.

I found myself in a situation where I needed to share a set of Skills between multiple projects. I was also working on an internal library that LLMs simply did not know about, and I needed a way to teach them.

I found myself implementing Zakira.Imprint (`Zakira` means `Memory` in Arabic).

The Problem

I wanted to share a set of AI Skills across multiple projects. At first, I was manually copying files between repositories. This worked fine for a while, but it quickly became a maintenance burden. Every time I updated a skill, I had to remember which projects were using it and manually update each one.

On top of that, I was developing an internal library. LLMs do not have training data for internal libraries (obviously), and while they can sometimes decompile and understand APIs, they often miss the bigger picture; the why behind the library, the common patterns, the pitfalls to avoid, and the best practices.

Sure, if the documentation exists somewhere, you can sometimes point the AI to it. But documentation is often lacking and incomplete. This is especially true for proprietary libraries, where decompiling the code can help with understanding the API surface, but not with the broader context.

The same problem exists with MCP servers. Your service or library might have an associated MCP server that provides dynamic tooling, but users often do not know it exists. Maybe the documentation mentions it somewhere, maybe it does not. Even if it does, there is no guarantee that developers will read that specific section before they start coding. They install the library, start using it, and never discover the MCP server that could have made their workflow significantly better.

What If I Could Ship AI Configurations Like Skills or MCP Configs with the Library Itself?

What if library authors could ship Skills, custom instructions, or even MCP configuration alongside their NuGet packages? This way, when you install a library, the AI assistant would automatically get the context it needs to use that library correctly.

This is not a new concept. We have been doing something similar with Roslyn Analyzers for years. You install a NuGet package, and you get code analysis rules that guide your coding. Why not do the same for AI assistants?

Enter Zakira.Imprint

Imprint is a package that enables distributing AI configurations like Skills and MCPs via NuGet. The concept is simple:

  1. Package your AI Skills (markdown files, custom instructions, MCP configurations, scripts, and any other files) as a NuGet package
  2. When someone adds your package to their project, the skills are automatically copied to .github/skills/, .claude/skills/, or .cursor/rules/ (if any of those agents are detected)
  3. When the package is updated, the skills are updated on the next build
  4. When the package is removed and the project is cleaned, the skills are removed

This approach brings several benefits that I have found invaluable.

Easy to Ship

Instead of manually copying files or maintaining shared repositories, you just pack your skills into a NuGet package. Anyone who wants to use them just adds a package reference:

dotnet add package MyUsefulSkills
dotnet build

That is it. The skills are now installed and ready for AI assistants to use.

Easy to Update

When you publish a new version of your skills package or even MCP configuration, consumers just need to update their package reference. The MSBuild targets detect the version change and automatically replace the old skills with the new ones:

dotnet add package MyUsefulSkills --version 2.0.0
dotnet build
# Skills are automatically updated!

No more “did you remember to copy the new skills?” conversations.

Library Authors Can Teach AI About Their Libraries

This is the part I am most excited about. If you are authoring a library, whether internal or public, you can now ship AI instructions alongside it. Your users install your library, and their AI assistant immediately knows:

  • How to use your APIs correctly
  • Common patterns and best practices
  • Pitfalls to avoid
  • Migration guides between versions

For internal libraries, where LLMs have no training data and, most importantly, no broader context, this is a game changer. Instead of the AI guessing (often incorrectly) how to use your library, it gets explicit guidance directly from the library authors.

No Code Changes Required

The skills are installed to each agent’s native directory (e.g., .github/skills/, .claude/skills/, .cursor/rules/) and .gitignore files are automatically generated to prevent tracking. No manual .gitignore configuration is needed.

This means:

  • No code changes to commit
  • Skills are regenerated on every build
  • CI/CD environments get fresh skills on every build

How It Works

The mechanism is similar to how Roslyn Analyzers are distributed. Let me break it down.

The Architecture: Zakira.Imprint.Sdk

At the heart of Imprint is a shared engine package called Zakira.Imprint.Sdk. This package contains compiled MSBuild tasks that handle all the heavy lifting: copying skill files, managing manifests, merging MCP configurations, and cleaning up. Individual skill packages do not duplicate any of this logic; they simply declare what content they ship, and the SDK handles the rest.

The Package Structure

An Imprint package contains:

  • content/skills/**/* – The actual skill files, with any file type, preserving folder structure
  • content/mcp/{PackageId}.mcp.json – MCP server fragment (optional)
  • build/{PackageId}.targets – Auto-generated by the SDK at pack time

The package also takes a dependency on Zakira.Imprint.Sdk, which provides the MSBuild tasks that process these declarations.

NuGet Restore and MSBuild Integration

When NuGet restores a package that contains a build/{PackageId}.targets file, MSBuild automatically imports it. This is standard NuGet behavior, nothing special here. The Zakira.Imprint.Sdk package uses the buildTransitive/ folder convention so its targets are imported transitively through skill packages. Package authors never need to write the .targets file manually; the SDK generates it during dotnet pack.

How Packages Declare Content

Package authors declare content using <Imprint> items directly in their .csproj file:

<ItemGroup>
  <Imprint Include="skills\**\*" />   <!-- Type defaults to "Skill" -->
  <Imprint Include="mcp\*.mcp.json" Type="Mcp" />
</ItemGroup>

At pack time, Zakira.Imprint.Sdk automatically generates the necessary .targets file and includes it in the NuGet package. No manual .targets authoring is required.

The SDK processes all <Imprint> items and generates the appropriate MSBuild targets that will run when consumers build their projects.

Target Execution

The SDK hooks into the build lifecycle with four targets:

  • Imprint_CopyContent (BeforeTargets="BeforeBuild") — Copies all declared skill files, writes per-package manifests to .imprint/, creates .gitignore
  • Imprint_CleanContent (AfterTargets="Clean") — Reads manifests, deletes only tracked files, removes empty directories
  • Imprint_MergeMcp (BeforeTargets="BeforeBuild") — Merges all MCP fragments into .vscode/mcp.json (or equivalent for other agents)
  • Imprint_CleanMcp (AfterTargets="Clean") — Removes managed MCP servers, preserves user-defined ones

The key points:

  • Skills are copied before every build (skipping design-time builds for IDE performance)
  • All file types are included, preserving folder structure from the skills/ directory
  • A shared .gitignore at the skills root prevents files from being committed
  • Per-package manifests (.imprint/{PackageId}.manifest) track exactly which files each package installed
  • Skills are cleaned up with dotnet clean — only the specific files from each package are removed

Multi-Package Support

Multiple packages can install skills into the same .github/skills/ (or equivalent) folder. Each package’s skill files are copied preserving their folder structure from the skills/ directory within the package:

.github/
  skills/
    .gitignore
    deployment/ # example
      SKILL.md
    logging/    # example
      SKILL.md

On clean, the manifest-based tracking ensures each package only removes the specific files it installed, so multiple packages coexist safely.

MCP Server Injection: Beyond Skills

After building the skills distribution, I realized there was another piece of the puzzle missing. Modern AI assistants do not just consume static files; they connect to MCP (Model Context Protocol) servers that provide dynamic tools, resources, and prompts. VS Code discovers these servers through a .vscode/mcp.json (or equivalent) file.

What if an Imprint package could also configure MCP servers? Instead of asking users to manually edit their mcp.json, the package would inject the right server configuration at build time.

How MCP Injection Works

The approach follows the same philosophy as skills distribution: install a package, build, and everything is configured for you.

Each Imprint package that ships an MCP server includes a fragment file — a small JSON file containing its server definitions:

{
  "servers": {
    "azure-mcp-server": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@anthropic-ai/azure-mcp-server"]
    }
  }
}

At build time, the Zakira.Imprint.Sdk engine collects all fragment files from installed packages and merges them into .vscode/mcp.json (or equivalent). The result is a single file that VS Code (or other editors) reads to discover all available MCP servers.
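For illustration, if two installed packages each contribute a fragment, the merged file might look like this (the second server name here is a hypothetical example, not from the actual packages):

{
  "servers": {
    "azure-mcp-server": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@anthropic-ai/azure-mcp-server"]
    },
    "docs-mcp-server": {
      "type": "stdio",
      "command": "dotnet",
      "args": ["tool", "run", "docs-mcp"]
    }
  }
}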

The Hard Part: Not Breaking User Configuration

The tricky part is not the merge itself; it is knowing what to keep and what to remove. Users might have their own servers defined in mcp.json. If an Imprint package is removed, only its servers should be cleaned up. Other servers, including user-defined ones, must survive.

To solve this, I introduced a manifest file (.vscode/.imprint-mcp-manifest) that tracks which server keys are managed by Imprint. This file is automatically gitignored, while mcp.json itself can be committed to source control.

Idempotent and Safe

The merge logic is idempotent: if nothing changed since the last build, mcp.json is not rewritten. This means no unnecessary git diffs. On dotnet clean, only managed servers are removed. If the file has no remaining content after cleanup, it is deleted entirely.

Top-level properties like "inputs" (used by VS Code for secret prompts) are preserved through all operations. Your hand-crafted configuration is never touched.

Adding MCP to Your Package

If you want your Imprint package to inject MCP servers, add two things:

  1. An mcp/<PackageId>.mcp.json fragment file with your server definitions
  2. An <Imprint> item with Type="Mcp" in your .csproj:
<ItemGroup>
    <Imprint Include="mcp\*.mcp.json" Type="Mcp" />
</ItemGroup>

That is it. The Zakira.Imprint.Sdk handles the merge automatically. When a consumer installs your package and builds, they get both the AI skills and the MCP server configuration.

Finding the Right Balance

One concern I had when designing this was overcrowding. What if every NuGet package starts shipping AI skills? Your .github/skills/ folder could become cluttered with files you do not need.

The solution is simple: these are development dependencies. They are marked as PrivateAssets="all" in the package reference, meaning they do not flow to downstream projects. And since a shared .gitignore is placed at the skills root, they do not bloat your repository.
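In practice, a consumer's reference to a skills-only package would look something like this (the package name is illustrative):

<ItemGroup>
  <PackageReference Include="MyUsefulSkills" Version="1.0.0" PrivateAssets="all" />
</ItemGroup>

Because of PrivateAssets="all", the package stays a development-time concern of this project only and never flows to projects that reference it.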

For library authors, I would recommend being intentional about what you include. Ship skills that genuinely help users of your library. Do not include generic programming advice that AI already knows.

On top of that, library authors can choose if Skills and MCP fragments are opt-in or opt-out for consumers. By setting ImprintEnabledByDefault in the package’s .csproj, authors control the default behavior:

<PropertyGroup>
  <ImprintEnabledByDefault>false</ImprintEnabledByDefault> <!-- Opt-in: disabled unless user enables -->
</PropertyGroup>

Consumers can always override this per-package using metadata on their PackageReference:

<PackageReference Include="SomePackage" Version="1.0.0">
  <ImprintEnabled>false</ImprintEnabled> <!-- Disable or enable this package's skills/MCP -->
</PackageReference>

The consumer’s explicit setting always takes priority over the package author’s default.

Two Package Patterns

Imprint supports two patterns for package authors:

Skills-Only Packages

These packages ship only AI skills and MCP configurations, no compiled library code. They are development-time dependencies that leave no trace in the consumer’s build output.

Library + Skills Packages

These packages ship a compiled DLL and AI skills/MCP fragments. The DLL is a real runtime dependency, while the skills teach AI assistants how to use the library correctly.

This is the pattern I am most excited about for internal libraries. Your consumers get the library and the AI guidance in a single dotnet add package.

Creating Your Own Imprint Package

Creating an Imprint package is straightforward. You define your content using <Imprint> items in your .csproj, and the SDK handles the rest.

Project Structure

Create a project with the following structure:

PackageName.csproj
  skills/
    skill-folder-name/
      SKILL.md
  mcp/
    mcp-config-name.mcp.json    (optional)

Configure the .csproj

<ItemGroup>
  <PackageReference Include="Zakira.Imprint.Sdk" Version="1.0.0-preview">
    <PrivateAssets>compile</PrivateAssets>
  </PackageReference>
</ItemGroup>

<ItemGroup>
  <Imprint Include="skills\**\*" />
  <Imprint Include="mcp\*.mcp.json" Type="Mcp" />
</ItemGroup>

For skills-only packages (no compiled DLL), it is recommended to also add the following, although it is not strictly required:

<PropertyGroup>
  <IncludeBuildOutput>false</IncludeBuildOutput>
  <DevelopmentDependency>true</DevelopmentDependency>
</PropertyGroup>

That is it. When you run dotnet pack, the SDK automatically generates the necessary .targets file and includes it in your NuGet package. No manual .targets authoring required.

Use Cases

I have found this pattern useful for several scenarios:

Organization-wide Standards: Package your company’s coding standards, security guidelines, and architectural patterns as skills. Every project that references the package gets consistent guidance.

Framework Best Practices: Create a package with best practices for specific frameworks. For example, AzureBestPractices includes guidance on Azure SDK usage, resource naming, and security patterns.

Internal Library Documentation: Ship your internal library with skills that teach AI how to use it. With the library + skills pattern, your consumers get the DLL and the AI guidance in a single package install. This is especially valuable for complex libraries with non-obvious usage patterns.

MCP Server Distribution: Ship MCP server configurations alongside your skills. Consumers get both static knowledge (skills) and dynamic tooling (MCP servers) from a single NuGet package install.

Team Knowledge Sharing: Package tribal knowledge that would otherwise live in wiki pages or developers’ heads. Make it available to AI assistants so they can help new team members.

Multi-Agent Support: Beyond Copilot

AI assistants are not a monoculture. Teams use Claude, OpenCode, Cursor, and increasingly other tools alongside Copilot. Maintaining separate skill files for each agent is the same copy-paste problem Imprint was built to solve.

Imprint includes multi-agent support out of the box. A single NuGet package distributes skills and MCP configurations to every AI agent simultaneously, placing files in each agent’s native directory structure.

How It Works

Each AI agent has its own conventions for where it looks for skills and MCP configurations:

Agent     Skills Path       MCP Path           MCP Root Key
Copilot   .github/skills/   .vscode/mcp.json   servers
Claude    .claude/skills/   .claude/mcp.json   mcpServers
Cursor    .cursor/rules/    .cursor/mcp.json   mcpServers

Imprint auto-detects which agents you use by scanning for their configuration directories. If .github/ and .claude/ both exist, Imprint targets both. The same skill content is copied to each agent’s native location, and MCP server configs are merged into each agent’s mcp.json.

MCP Schema Transformation

One subtle but important detail: different AI agents use different JSON schemas for their MCP configuration. VS Code/Copilot expects servers under a "servers" root key, while Claude and Cursor expect "mcpServers".

Package authors do not need to worry about this; they always write fragments using "servers":

{
  "servers": {
    "my-server": { "type": "stdio", "command": "npx", "args": [...] }
  }
}

The SDK automatically transforms this to each agent’s expected schema when writing to their mcp.json files. Copilot gets "servers", Claude and Cursor get "mcpServers". The inner server definition is identical across all agents.
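So, given the fragment above, the entry written to Claude's or Cursor's mcp.json would come out as:

{
  "mcpServers": {
    "my-server": { "type": "stdio", "command": "npx", "args": [...] }
  }
}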

Zero Configuration Required

The default behavior is auto-detection. You do not need to change anything — Imprint looks for .github/, .claude/, and .cursor/ directories at build time and targets whichever agents are present. If none are detected, it falls back to Copilot as the default.

If you want explicit control, set a single MSBuild property:

<PropertyGroup>
  <ImprintTargetAgents>claude;github</ImprintTargetAgents>
</PropertyGroup>

Unified Manifest

With multiple agents, tracking which files belong to which agent becomes important for clean operations. Imprint uses a unified manifest format at .imprint/manifest.json that tracks everything in one place:

{
  "version": 2,
  "packages": {
    "Zakira.Imprint.Sample": {
      "files": {
        "copilot": [".github/skills/personal/SKILL.md"],
        "claude": [".claude/skills/personal/SKILL.md"]
      }
    }
  },
  "mcp": {
    "copilot": {
      "path": ".vscode/mcp.json",
      "managedServers": ["sample-echo-server"]
    },
    "claude": {
      "path": ".claude/mcp.json",
      "managedServers": ["sample-echo-server"]
    }
  }
}

On dotnet clean, Imprint reads this manifest and removes exactly the files it installed — across all agents, across all packages. No leftover files, no accidental deletions.

Package Authors Get It for Free

The multi-agent support is entirely in Zakira.Imprint.Sdk. Package authors do not need to do anything special — the same <Imprint> items automatically distribute to every agent the consumer has configured.

Summary

Distributing AI Skills via NuGet is a natural extension of how we already distribute tools like Roslyn Analyzers. With MCP Server Injection and multi-agent support, a single NuGet package can now deliver both static knowledge and dynamic tool configurations to every AI assistant your team uses. It solves real problems:

  • No more copying files between projects
  • Easy updates through package versioning
  • Library authors can teach AI about their libraries
  • MCP servers are configured automatically, no manual mcp.json editing
  • No code changes or repository bloat
  • One package, every AI agent: Copilot, Claude, Cursor, and more

The pattern is simple: it builds on existing NuGet and MSBuild infrastructure, and it just works.

If you maintain an internal library, consider adding AI skills to help users get started. If you have organization-wide standards, package them up. The barrier to entry is low, and the benefits compound as more people adopt the pattern.

You can find the complete source code and examples on GitHub.


Have questions or ideas for improvement? I would love to hear them. Find me on Twitter or GitHub.

When disaster strikes: the complete guide to Failover Appenders in Log4net

Log4Net is a cool, stable, fully featured, highly configurable, highly customizable, and open source logging framework for .NET.

One of its powerful features is that it can be used to write logs to multiple targets, by using “Appenders”.
An Appender is a Log4Net component that handles log events; it receives a log event each time a new message is logged and 'handles' it. For example, a simple file appender will write the new log event to a local file.

Although there are a lot of Appenders included in the Log4Net framework, occasionally we won't find one that fully satisfies our needs. During a project I was working on, I had to implement a failover mechanism for logging, where the app had to start by logging to a remote service, and then fall back to the local file system if that remote service was no longer reachable.

Fortunately, Log4Net allows us to implement our own custom Appenders.
The Appender had to start by writing logs to a remote service, and fall back to a local disk file after the first failed attempt to send a log message to that service.

Implementing the Appender

To create a custom Appender we have to implement the IAppender interface. Although easy to implement, Log4Net makes it even simpler by providing the AppenderSkeleton abstract class, which implements IAppender and adds common functionalities on top of it.

public class FailoverAppender : AppenderSkeleton
{
    private AppenderSkeleton _primaryAppender;
    private AppenderSkeleton _failOverAppender;

    //Public setters are necessary for configuring
    //the appender using a config file
    public AppenderSkeleton PrimaryAppender 
    { 
        get { return _primaryAppender;} 
        set 
        { 
             _primaryAppender = value; 
             SetAppenderErrorHandler(value); 
        } 
    }

    public AppenderSkeleton FailOverAppender 
    { 
        get { return _failOverAppender; } 
        set 
        { 
            _failOverAppender = value; 
            SetAppenderErrorHandler(value); 
        } 
    }

    public IErrorHandler DefaultErrorHandler { get; set; }

    //Whether to use the failover Appender or not
    public bool LogToFailOverAppender { get; private set; }

    public FailoverAppender()
    {
        //The ErrorHandler property is defined in
        //AppenderSkeleton
        DefaultErrorHandler = ErrorHandler;
        ErrorHandler = new FailOverErrorHandler(this);
    }

    protected override void Append(LoggingEvent loggingEvent)
    {
        if (LogToFailOverAppender)
        {
            _failOverAppender?.DoAppend(loggingEvent);
        }
        else
        {
            try
            {
                _primaryAppender?.DoAppend(loggingEvent);
            }
            catch
            {
                ActivateFailOverMode();
                Append(loggingEvent);
            }
        }
    }

    private void SetAppenderErrorHandler(AppenderSkeleton appender)
        => appender.ErrorHandler = new PropogateErrorHandler();

    internal void ActivateFailOverMode()
    {
        ErrorHandler = DefaultErrorHandler;
        LogToFailOverAppender = true;
    }
}

The FailoverAppender above accepts two appenders; a primary appender and a failover appender.

By default it will propagate log events only to the primary appender, but if an exception is thrown from the primary appender during event logging, it will stop sending log events to that appender and instead start propagating log events only to the failover appender.

I’ve used AppenderSkeleton to reference both the primary and the failover appenders in order to utilize a piece of functionality in the AppenderSkeleton class – in this case, the ability to handle errors (i.e., exceptions) thrown during an appender’s attempt to log an event.
We can do so by assigning an object to the ErrorHandler property defined in AppenderSkeleton.

I use the LogToFailOverAppender flag to determine whether we are in ‘normal’ mode or in ‘FailOver’ mode.

The actual logging logic exists in the overridden ‘Append’ method:

protected override void Append(LoggingEvent loggingEvent)
{
    if (LogToFailOverAppender)
    {
        _failOverAppender?.DoAppend(loggingEvent);
    }
    else
    {
        try
        {
            _primaryAppender?.DoAppend(loggingEvent);
        }
        catch
        {
            ActivateFailOverMode();
            Append(loggingEvent);
        }
    }
}

If the LogToFailOverAppender flag is set, the method logs events using the failover appender, since the flag means an exception has already been thrown. Otherwise, it logs events using the primary appender, and activates failover mode if an exception is thrown in the process.

The following are the IErrorHandler implementations that I defined and used:

/*
This is important. 
By default the AppenderSkeleton's ErrorHandler doesn't
propagate exceptions
*/
class PropogateErrorHandler : IErrorHandler
{
    public void Error(string message, Exception e, ErrorCode errorCode)
    {
        throw new AggregateException(message, e);
    }

    public void Error(string message, Exception e)
    {
        throw new AggregateException(message, e);
    }

    public void Error(string message)
    {
        throw new LogException($"Error logging an event: {message}");
    }
}
/*
This is just in case something bad happens. It signals 
the FailoverAppender to use the failback appender.
*/
class FailOverErrorHandler : IErrorHandler
{
    public FailOverAppender FailOverAppender { get; set; }
        
    public FailOverErrorHandler(FailOverAppender failOverAppender)
    {
        FailOverAppender = failOverAppender;
    }

    public void Error(string message, Exception e, ErrorCode errorCode)
        => FailOverAppender.ActivateFailOverMode();

    public void Error(string message, Exception e)
        => FailOverAppender.ActivateFailOverMode();

    public void Error(string message)
        => FailOverAppender.ActivateFailOverMode();
}

Testing the Appender

I’ve created a config file you can use to test the appender. These are the important bits:

<!--This custom appender handles failovers. If the first appender fails, it'll delegate the message to the backup appender-->
<appender name="FailoverAppender" type="MoreAppenders.FailoverAppender">
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
    </layout>

    <!--This is a custom test appender that will always throw an exception -->
    <!--The first and the default appender that will be used.-->
    <PrimaryAppender type="MoreAppenders.ExceptionThrowerAppender" >
        <ThrowExceptionForCount value="1" />
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
        </layout>        
    </PrimaryAppender>

    <!--This appender will be used only if the PrimaryAppender has failed-->
    <FailOverAppender type="log4net.Appender.RollingFileAppender">
        <file value="log.txt"/>
        <rollingStyle value="Size"/>
        <maxSizeRollBackups value="10"/>
        <maximumFileSize value="100mb"/>
        <appendToFile value="true"/>
        <staticLogFileName value="true"/>
        <layout type="log4net.Layout.PatternLayout">
            <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
        </layout>
    </FailOverAppender>
</appender>
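To actually route log events through it, the appender also needs to be referenced from a logger — for example, the root logger (this is standard Log4Net configuration, not specific to the custom appender):

<root>
    <level value="DEBUG" />
    <appender-ref ref="FailoverAppender" />
</root>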

In this post I’ll discuss only the parts that are relevant to the appender. You can find the full config file here. The rest of the config file is regular Log4Net configuration, which you can read more about here and here.

Log4Net has a feature that gives us the ability to instantiate and assign values to public properties of appenders in the config file using XML. I’m using this feature to instantiate and assign values to both the PrimaryAppender and the FailOverAppender properties.

In this section I’m instantiating the PrimaryAppender:

<PrimaryAppender type="MoreAppenders.ExceptionThrowerAppender" >
    <ThrowExceptionForCount value="1" />
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
    </layout>        
</PrimaryAppender>

The type attribute’s value is the fully qualified name of the appender’s class.
For our example, I’ve created the ExceptionThrowerAppender appender for testing purposes. It can be configured to throw an exception once per a configurable number of log events.
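The ExceptionThrowerAppender itself is not shown in this post; a minimal sketch of such a test appender could look like this (this is my reconstruction, not necessarily the exact implementation from the repository):

public class ExceptionThrowerAppender : AppenderSkeleton
{
    private int _count;

    //Configurable from XML: throw once per this many log events.
    //With a value of 1, every append throws.
    public int ThrowExceptionForCount { get; set; } = 1;

    protected override void Append(LoggingEvent loggingEvent)
    {
        if (++_count % ThrowExceptionForCount == 0)
        {
            throw new InvalidOperationException("Simulated appender failure");
        }
    }
}

With ThrowExceptionForCount set to 1, as in the config above, the appender throws on every log event, which is exactly what we need to exercise the failover path.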

In a similar manner, in the following XML I’ve instantiated and configured the FailOverAppender to be a regular RollingFileAppender:

<FailOverAppender type="log4net.Appender.RollingFileAppender">
    <file value="log.txt"/>
    <rollingStyle value="Size"/>
    <maxSizeRollBackups value="10"/>
    <maximumFileSize value="100mb"/>
    <appendToFile value="true"/>
    <staticLogFileName value="true"/>
    <layout type="log4net.Layout.PatternLayout">
        <conversionPattern value="%date [%thread] %-5level %logger - %message%newline"/>
    </layout>
</FailOverAppender>

I used the following code to create log events:

class Program
{
    static void Main(string[] args)
    {
        XmlConfigurator.Configure();

        var logger = LogManager.GetLogger(typeof(Program));

        for (var index = 0; index < int.MaxValue; ++index)
        {
            logger.Debug($"This is a debug message number {index}");
        }

        Console.ReadLine();
    }
}

I started the program in debug mode and placed a breakpoint inside the ‘Append’ method:

First logging message - goes to the primary appender

Notice how OzCode’s Predict the Future feature marks the if-statement with an X and a red background, telling us that the condition evaluated to false. That means an exception hasn’t been thrown yet by the primary appender.

To make the loggingEvent message value easier to figure out, I’ve used OzCode’s Magic Glance feature to view the necessary information in every LoggingEvent object.

Selecting the properties to show

The result:

Magic Glance feature

By continuing the program, the primary appender will handle the logging event and throw an exception.

Exception is thrown.

After that exception is propagated by the ErrorHandler, it is handled by the catch-clause, which activates failover mode (notice how the log event is sent to the FailOverAppender as well), so all future logging events go only to the FailOverAppender.

FailOverAppender mode is active

This time the if-statement is marked with a green ‘V’. This tells us that the condition evaluated to true and that the if-statement body will execute (sending the logging event to the failover appender).

You can view and download the code by visiting this GitHub Repository.

Summary

Log4Net is a well-known logging framework for .Net. The framework comes with a list of out-of-the-box Appenders that we can use in our programs.
Although these Appenders are usually sufficient for most of us, sometimes you’ll need Appenders that are more customized for your needs.

We saw how we can use Log4Net’s extensibility features to implement our own custom Appenders. In this example, we created a failover mechanism that switches the active Appender when it fails to append log messages.

Log4Net is highly extensible and it has many more extensibility features that I encourage you to explore.

Note: this post is published also at OzCode’s blog.

Code Generation Chronicles #1 – Using Fody to Inject KnownType attributes

Code Generation Chronicles

As part of my new year resolutions, I’ve decided to put more effort into learning code generation techniques.

This is the first blog post in a series exploring code generation. Although I won’t always dive into the internals, I do promise that I’ll show examples of what we can achieve with each technique and when it is better to use each one.

Fody

Fody is an open source library for IL weaving: it can generate and manipulate IL code at build time.
Getting Fody is easy – it is available as a NuGet package and doesn’t require any installation or modification to the build system.

One of the benefits of using Fody is that it leaves no footprint – since the IL weaving is done at build time, no Fody-related assemblies are needed at runtime, which can be a good thing if you are worried about your project’s dependencies or the number of assemblies you ship with your product.
On top of that, Fody is highly extensible and there is an active open source community around it.

One example of how Fody can save developers’ time is the PropertyChanged extension: by placing the ImplementPropertyChanged attribute it provides on a class, it will generate all of the code necessary to fully implement the INotifyPropertyChanged interface in that class.
More extensions can be found here.
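As a hedged illustration of how such an extension is used (the view-model class and property names here are hypothetical), here is a sketch assuming the PropertyChanged package of that era:

```csharp
using PropertyChanged;

//With the PropertyChanged extension installed, this single attribute
//replaces all of the INotifyPropertyChanged boilerplate.
[ImplementPropertyChanged]
public class PersonViewModel
{
    //At build time, Fody weaves a PropertyChanged event into the class
    //and rewrites these setters to raise it whenever a value changes.
    public string GivenName { get; set; }
    public string FamilyName { get; set; }
}
```

The weaved result behaves as if we had implemented INotifyPropertyChanged by hand, but without any of the repetitive setter code.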

Using Fody to add WCF’s KnownType attributes

At work we had a client that was implemented using .Net Remoting as the communication model for its services. At some point, we wanted to replace .Net Remoting with the more modern WCF. The problem was that much of the code base was unmaintainable legacy code, and we wanted to make as few changes as possible and reuse as much of the existing architecture as we could.
One of the challenges resided in a WCF quirk: by default, WCF doesn’t allow passing a derived type of the DataContract type that is defined in the OperationContract.
In WCF, the way to make derived classes work is to add a KnownType attribute on the base type for each derived type we use.
As an example, if we have a class A and two derived classes B and C, we would have to add two KnownType attributes over class A; one for Class B and one for Class C.

[KnownType(typeof(B))]
[KnownType(typeof(C))]
class A
{
}

class B : A
{
}

class C : A
{
}
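To see why this matters in practice, here is a minimal WCF sketch (the service contract name is hypothetical) where the operation is declared in terms of the base class:

```csharp
using System.Runtime.Serialization;
using System.ServiceModel;

[DataContract]
[KnownType(typeof(B))]
public class A { }

[DataContract]
public class B : A { }

[ServiceContract]
public interface IItemService
{
    //The operation is declared against the base type A.
    //Without the KnownType attribute on A, the DataContractSerializer
    //throws an exception as soon as an instance of B is passed here.
    [OperationContract]
    void Process(A item);
}
```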

Since this scenario was common in the existing architecture, we were looking for a way to solve the issue without manually adding KnownType attributes, since it would be easy to miss some or to forget adding the attribute over new derived classes in the future.

In the end, we managed to save time and replace the communication layer easily by using Fody to automatically add KnownType attributes for all derived types on their base classes at build time.

Implementing a Fody extension

If you want to write a Fody extension, start by cloning the BasicFodyAddin repository. This repository is maintained by the open source community, and it simplifies both the implementation and the deployment of the extension as a NuGet package.
The BasicFodyAddin source code contains a solution with four C# projects:

  1. BasicFodyAddin, which will contain the extension.
  2. Nuget, which is used for deploying the extension as a NuGet package.
  3. Tests, for writing unit tests.
  4. AssemblyToProcess, which is used as a target assembly for testing your new Fody extension.

We will focus on BasicFodyAddin.

In the BasicFodyAddin project there is a file called ModuleWeaver.cs. Go ahead and replace its content with the following code:

using System;
using System.Linq;
using Mono.Cecil;
using Mono.Cecil.Rocks;

namespace KnownTypes.Fody
{
   public class ModuleWeaver
   {
      public ModuleDefinition ModuleDefinition { get; set; }
   
      public void Execute()
      {
      }

      void AddKnownTypeAttributes(ModuleDefinition module, TypeDefinition baseType)
      {
      }

      void RemoveKnowsDeriveTypesAttribute(TypeDefinition baseType)
      {
      }
   }
}

This is the class we will use to implement the extension. The ModuleDefinition property will be populated during build time by Fody and it will contain the target Module for the IL Weaving.
As you have probably noticed, the property is of type ModuleDefinition.
ModuleDefinition is a class representing an MSIL module in Mono.Cecil, the library Fody relies on to generate and manipulate IL code. In addition to ModuleDefinition, we will use more types it defines, such as TypeDefinition.

When using Fody, your ModuleWeaver class must meet the following requirements:

  • Be a public, non-abstract instance class.
  • Have an empty constructor.
  • Have a ModuleDefinition property that will be populated during build.
  • Have an Execute method.

The ModuleWeaver class has three main methods:

  • Execute – Entry point. It calls other methods for performing the IL weaving.
  • AddKnownTypeAttributes – Decorates base types with KnownType attributes.
  • RemoveKnowsDeriveTypesAttribute – Removes KnowsDeriveTypes attributes from base types.

Note: A few helper methods will be added down the road.

Now create a new class called KnowsDeriveTypesAttribute:

[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public class KnowsDeriveTypesAttribute : Attribute
{
}

This attribute is used to explicitly mark the base types we want to decorate with KnownType attributes. We could have added KnownType attributes over every base class in the assembly, but we wanted the extra control.
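As a quick, self-contained sketch of what the weaver will later do at the IL level, the same “find marked base types and their derived types” lookup can be expressed with plain reflection (the Shape, Circle, and Square classes are hypothetical, and the attribute definition is repeated so the snippet compiles on its own):

```csharp
using System;
using System.Linq;
using System.Reflection;

[AttributeUsage(AttributeTargets.Class, Inherited = false)]
public class KnowsDeriveTypesAttribute : Attribute { }

[KnowsDeriveTypes]
public class Shape { }

public class Circle : Shape { }
public class Square : Shape { }

public static class Demo
{
    public static void Main()
    {
        var assembly = typeof(Shape).Assembly;

        //Find every type marked with the attribute - the reflection
        //equivalent of the weaver's attribute filter.
        var markedTypes = assembly.GetTypes()
            .Where(t => t.GetCustomAttribute<KnowsDeriveTypesAttribute>() != null);

        foreach (var baseType in markedTypes)
        {
            //The reflection equivalent of GetDerivedTypes.
            var derived = assembly.GetTypes()
                .Where(t => t.BaseType == baseType)
                .Select(t => t.Name);

            Console.WriteLine($"{baseType.Name}: {string.Join(", ", derived)}");
        }
    }
}
```

Running this prints each marked base type together with its derived types, which is exactly the mapping the weaver turns into KnownType attributes.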

Implementing Execute

public void Execute()
{
   foreach (var type in ModuleDefinition.GetTypes().Where(HasKnownDeriveTypesAttribute))
   {
      AddKnownTypeAttributes(ModuleDefinition, type);
      RemoveKnowsDeriveTypesAttribute(type);
   }
}

//A helper method
//for filtering out the types without the KnowsDeriveTypes attribute.
bool HasKnownDeriveTypesAttribute(TypeDefinition type) =>
   type.CustomAttributes.Any(attr => attr.AttributeType.FullName == typeof(KnowsDeriveTypesAttribute).FullName);

In Execute we locate all the types decorated with KnowsDeriveTypesAttribute, and we decorate them with KnownType attributes according to their sub types.
After that is done, we remove the KnowsDeriveTypesAttribute as it isn’t necessary anymore.

Implementing AddKnownTypeAttributes

void AddKnownTypeAttributes(ModuleDefinition module, TypeDefinition baseType)
{
   //Locate derived types
   var derivedTypes = GetDerivedTypes(module, baseType);

   //Gets a TypeDefinition representing the KnownTypeAttribute type.
   var knownTypeAttributeTypeDefinition = GetTypeDefinition(module, "System.Runtime.Serialization", "KnownTypeAttribute");

   //Gets the constructor for the KnownTypeAttribute type.
   var knownTypeConstructor = GetConstructorForKnownTypeAttribute(module, knownTypeAttributeTypeDefinition);

   //Adds a KnownType attribute for each derived type
   foreach (var derivedType in derivedTypes)
   {
      var attribute = new CustomAttribute(knownTypeConstructor);
      attribute.ConstructorArguments.Add(new CustomAttributeArgument(knownTypeConstructor.Parameters.First().ParameterType, derivedType));

      baseType.CustomAttributes.Add(attribute);
   }
}

Let’s break it down.

//Locate derived types
var derivedTypes = GetDerivedTypes(module, baseType);

//Gets a TypeDefinition representing the KnownTypeAttribute type.
var knownTypeAttributeTypeDefinition = GetTypeDefinition(module, "System.Runtime.Serialization", "KnownTypeAttribute");

//Gets the constructor for the KnownTypeAttribute type.
var knownTypeConstructor = GetConstructorForKnownTypeAttribute(module, knownTypeAttributeTypeDefinition);

In the code above we are performing three steps:

  1. Finding and retrieving all derived types of the given base type.
  2. Creating a TypeDefinition instance for the KnownType attribute.
  3. Finding and retrieving a constructor for the KnownType attribute.

The first step is done using a helper method called GetDerivedTypes.

Given a module and a type, it returns an array of TypeDefinitions containing all of the types in the given module that derive from the given type.

The second step is done using a helper method called GetTypeDefinition.

Given an assembly name and a type name, it returns a TypeDefinition for that type.

The third step is done using a helper method called GetConstructorForKnownTypeAttribute.

Given a ModuleDefinition and a TypeDefinition, it returns the constructor of that type that accepts a System.Type argument.

//A helper method. Given a module and a base type:
//It returns all derived types of that base type.
TypeDefinition[] GetDerivedTypes(ModuleDefinition module, TypeDefinition baseType)
   => module.GetTypes()
         .Where(type => type.BaseType?.FullName == baseType.FullName)
         .ToArray();

//A helper method. Given a module, an assembly name, and a type name:
//It returns a TypeDefinition for that type.
TypeDefinition GetTypeDefinition(ModuleDefinition module, string assemblyName, string typeName) 
   => module.AssemblyResolver
         .Resolve(assemblyName)
         .MainModule.Types.Single(type => type.Name == typeName);

//A helper method. Given a module and type definition for the KnownType Attribute:
//It returns the constructor for the attribute accepting a System.Type object.
MethodReference GetConstructorForKnownTypeAttribute(ModuleDefinition module, TypeDefinition knownTypeAttributeTypeDefinition)
{
   var constructorMethodToImport = knownTypeAttributeTypeDefinition
                                     .GetConstructors()
                                     .Single(ctor => 1 == ctor.Parameters.Count && "System.Type" == ctor.Parameters[0].ParameterType.FullName);

   return module.Import(constructorMethodToImport);
}

Let’s continue and break down the last part of AddKnownTypeAttributes.

//Adds a KnownType attribute for each derived type
foreach (var derivedType in derivedTypes)
{
   var attribute = new CustomAttribute(knownTypeConstructor);
   attribute.ConstructorArguments.Add(new CustomAttributeArgument(knownTypeConstructor.Parameters.First().ParameterType, derivedType));

   baseType.CustomAttributes.Add(attribute);
}

In this part of the code we add a KnownType attribute to the base type for each derived type we previously found.
We are using the KnownType attribute’s constructor that expects the System.Type of the subclass.

Implementing RemoveKnowsDeriveTypesAttribute

This part is straightforward: we locate the KnowsDeriveTypes attribute on the base type and then remove it.

void RemoveKnowsDeriveTypesAttribute(TypeDefinition baseType)
{
   var foundAttribute = baseType.CustomAttributes
         .Single(attribute => attribute.AttributeType.FullName == typeof(KnowsDeriveTypesAttribute).FullName);

   baseType.CustomAttributes.Remove(foundAttribute);
}

Unit Tests & Debugging

Unit Tests are always important, especially when writing Fody extensions. We had good code coverage in the final product, but for this blog post I’ll use Unit Tests solely for debugging purposes, as I’ll be able to debug the extension by running the Unit Tests in debug mode.

For this post I’ve created two Unit Tests: one for testing the addition of the KnownType attributes and one for testing the removal of the KnowsDeriveTypes attribute. You can view the Unit Tests file here. I also went ahead and added three classes to the AssemblyToProcess project for testing the IL weaving process.
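As a rough sketch of what such a test can look like (WeaverHelper is a hypothetical helper along the lines of the scaffolding the BasicFodyAddin template provides for weaving AssemblyToProcess and loading the result):

```csharp
using System.Linq;
using System.Runtime.Serialization;
using NUnit.Framework;

[TestFixture]
public class WeaverTests
{
    [Test]
    public void BaseType_Is_Decorated_With_KnownType_Attributes()
    {
        //WeaverHelper (hypothetical) runs the ModuleWeaver over
        //AssemblyToProcess and returns the weaved assembly.
        var assembly = WeaverHelper.WeaveAssemblyToProcess();
        var baseType = assembly.GetType("AssemblyToProcess.A");

        var knownTypes = baseType
            .GetCustomAttributes(typeof(KnownTypeAttribute), false)
            .Cast<KnownTypeAttribute>()
            .Select(attribute => attribute.Type.Name)
            .ToArray();

        //One KnownType attribute is expected per derived type.
        CollectionAssert.AreEquivalent(new[] { "B", "C" }, knownTypes);
    }
}
```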

Even though Fody has a basic logging capability, it isn’t always easy to find and view the logs in the output window.
Fortunately, at debug time we can use OzCode’s Trace Points feature. I’ve added a trace point at the line that adds a KnownType attribute over base types.

//Trace point was added here in the AddKnownTypeAttributes method.
baseType.CustomAttributes.Add(attribute);

Read More