If you are wondering where the data of this site comes from, please visit https://api.github.com/users/poke/events. GitMemory does not store any data, but only uses NGINX to cache data for a period of time. The idea behind GitMemory is simply to give users a better reading experience.

ctolkien/SodaPop.ConfigExplorer 35

Displays diagnostic information on all the properties registered with configuration

ctolkien/SodaPop.RazorPagesSitemap 3

Generates a sitemap based on the layout of your Razor Pages website.

poke/AspNetCore-RazorBlazor 3

RazorBlazor combination example

Felixomni/gw2wiki-android 2

Guild Wars 2 Wiki app for Android

poke/angular-oauth2-oidc 0

Support for OAuth 2 and OpenId Connect (OIDC) in Angular.

poke/anticontainer 0

DownThemAll! AntiContainer

poke/aspnet-blazor-issue27716 0

https://github.com/dotnet/aspnetcore/issues/27716

poke/aspnet-logging-issue697 0

FileLoadException for System.Runtime.InteropServices.RuntimeInformation on old-style .csproj test projects

poke/aspnet-mvc-issue3196 0

https://github.com/aspnet/Mvc/issues/3196

issue comment dotnet/aspnetcore

Making HttpContext.User available to 3rd party code without Microsoft.AspNetCore.Http dependency

Remembering how many issues we had in the past with the usage of IHttpContextAccessor, where users incorrectly used it all over the place simply because it was there, I would also agree that we should hesitate to add another thing like that for the current principal. If Thread.CurrentPrincipal is backed by an async local, then this would include that.

I’m more than fine with adding a dedicated article in the documentation about how one could possibly approach this. Seeing that the different ideas in this issue often take just a few lines to integrate makes it clear, IMO, that we don’t need to expand ASP.NET Core by default and send the wrong message that it’s okay to do this. Instead, people can just copy over the relevant pieces into their own code, making this behavior and its downsides (which should also be clearly documented) very explicit.
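
To illustrate, here is a minimal sketch of what such a copied-over piece could look like: a small middleware that exposes HttpContext.User through Thread.CurrentPrincipal for the duration of a request. All names here are made up for the example, and it assumes that Thread.CurrentPrincipal is indeed backed by an async local:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class CurrentPrincipalMiddleware
{
    private readonly RequestDelegate _next;

    public CurrentPrincipalMiddleware(RequestDelegate next)
        => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        // Make the request’s user visible to downstream code for this request only.
        Thread.CurrentPrincipal = context.User;
        try
        {
            await _next(context);
        }
        finally
        {
            Thread.CurrentPrincipal = null;
        }
    }
}

Registering it is then an explicit opt-in in your own code, e.g. app.UseMiddleware<CurrentPrincipalMiddleware>().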

As for libraries like Restier that want to rely on this: Usually, these libraries will need some kind of integration package for ASP.NET Core anyway, so they could always add their own capabilities there. If you control the full library, switching to a custom interface to access the principal should be possible too. And if you really need to, the library could provide a way to set up Thread.CurrentPrincipal or something similar.

As a framework, ASP.NET Core and .NET (Core) itself should make it clear that the static .Currents are meant to be legacy though. So I would agree that we should avoid advertising their usage here by providing quick opt-ins that will be seen as an invitation to enable it by default.

robertmclaws

comment created time in 5 days

issue comment dotnet/runtime

Finalize logging analyzers

Does that mean that the analyzers are on track for 6.0? That's great news! We've been hit by the issue from diagnostic 5 a few times in production so this will help a lot. Thanks for following up with this @maryamariyan!

pakrym

comment created time in 9 days

issue comment natemcmaster/CommandLineUtils

Environment for generic host integration

Please keep it open for now. I forgot about this but would still like to give it a shot eventually.

poke

comment created time in 13 days

issue comment microsoft/azure-pipelines-agent

Agent installer does not support group managed service accounts (gMSA)

@tastyeggs I wouldn't go that far. Yes, it is a bit annoying that you cannot set up the service to use a gMSA during setup but it's not impossible. As mentioned earlier you can reconfigure the created service to use a different account after it has been created.

So yes, an annoyance but not a super critical security issue.

That being said, I'm still willing to contribute a fix.

michael-baker

comment created time in 14 days

issue comment dotnet/roslyn-analyzers

CA1508: false positive when combining null-coalesce with using statement

I have a similar example that also produces a false CA1508 warning in a using statement:

public void Test()
{
    var stream = GetStream();

    if (stream == null)
        return;

    using (stream) // <- 1508 here
    {
    }
}

private Stream GetStream() => null;

If you comment out the stream == null check, the suggestion disappears because stream could now be null, of course. But the using statement shouldn’t be affected by this in the first place.

innix

comment created time in 18 days

pull request comment manfredsteyer/angular-oauth2-oidc

Support custom grant types

For those still following this issue: Support for custom grant types was just merged via #919.

poke

comment created time in 21 days

pull request comment manfredsteyer/angular-oauth2-oidc

Custom grant type added

Not to be ungrateful for getting this change finally merged, but I would have appreciated some kind of attribution or thanks, considering that this change is completely based on my pull request #344 from three years ago (which I never updated again because I received no feedback whatsoever on it).

Anyway, thanks for giving attention to this library after all this time.

alexandis

comment created time in 21 days

issue comment dotnet/aspnetcore

ASP.NET Core 6 and Authentication Servers Discussion

@JeepNL ASP.NET Core Identity is a very different thing from IdentityServer. The former is not going away, so if that’s what you are talking about, you are misunderstanding this thread.

blowdart

comment created time in a month

issue comment dotnet/aspnetcore

ASP.NET Core 6 and Authentication Servers Discussion

@JeepNL

specially now that .NET 5/6 is attracting so many new devs

Especially new devs should avoid thinking that they need a custom authentication server. Despite what everyone appears to think, rolling your own identity provider is a special thing that you should only do if you actually know and understand the consequences (and security implications).

@the-black-wolf I understand your criticism of the .NET Foundation, but expecting the .NET Foundation to “just” ship these things for everyone to use free of charge is also not a solution. Who do you suppose should pay for this? Because this very critical work isn’t coming for free and requires a lot of work and dedication.

We got into exactly this situation because all those consumers of open source didn’t give a shit about the maintainers they continued to rely on. Don’t compare this with other foundations that exist within a healthier open source ecosystem where commercial offers are totally normal and accepted. But no, when this happens in .NET, people complain that Microsoft should provide these things for free instead of the maintainers looking for avenues to actually sell anything.

blowdart

comment created time in a month

issue comment dotnet/core

Will .Net 6 come pre-installed in Windows 11?

@terrajobst

we're trying really hard to avoid this.

The usual reasoning I hear behind this is that shipping it with the OS means that it is bound to the same update cycles as the operating system. However, there are multiple examples that contradict this. For example, the Calculator app and Notepad are both bundled with Windows but were somewhat recently announced to receive updates through the Windows Store.

There is also already a separate mechanism that makes it possible to receive updates for .NET Core via Microsoft Update, independent of the operating system.

So why is it not possible to ship .NET Core with the operating system while also providing OS-independent updates through one of these channels? Or at least have some kind of shim like that super weird python executable that triggers an install via the store.

Having a runtime installed by default would really help the adoption of .NET (Core) in the desktop space. Yes, it is possible to have self-contained deployments but that also means dealing with application sizes way above 50 MB which is a lot, especially for simpler applications (and no, AOT isn't there yet).

For what it's worth, I still build smaller WinForms or WPF applications using the .NET Framework because it can be shipped a lot more easily and is usable on every Windows machine by default. But it's sad that I have to do this.

shrinidhi111

comment created time in a month

issue comment MicrosoftDocs/vsts-rest-api-specs

Child section 'Overview' pages don't exist

The same thing happens with apparently every section inside the “Git” category. Each overview page links to a .yml file when the link should be without that extension.

E.g.

  • https://docs.microsoft.com/en-us/rest/api/azure/devops/git/Pull%20Requests.yml?view=azure-devops-rest-6.0 This is the link from the navigation which results in a 404.
  • https://docs.microsoft.com/en-us/rest/api/azure/devops/git/Pull%20Requests?view=azure-devops-rest-6.0 This is the same link without the .yml which apparently does not exist but at least auto-redirects to the 4.1 docs.
  • https://docs.microsoft.com/en-us/rest/api/azure/devops/git/pull%20requests?view=azure-devops-rest-4.1 This is a fully working link from the 4.1 documentation.

Note that the links only break for versions after 4.1.

flcdrg

comment created time in a month

issue comment dotnet/roslyn-analyzers

CA1812 - Generates false positive when using a type as a type parameter

So it seems that the case with the new constraint is handled.

I disagree. Generic type arguments are very often used in serialization settings, but serializers will usually not use any type constraints, mostly because they don’t rely on the generic typing to create the object.

For example, the following snippets all cause the CA1812 warning:

// Deserializing with System.Text.Json
JsonSerializer.Deserialize<InternalType>("…");

// Deserializing with Newtonsoft.Json
JsonConvert.DeserializeObject<InternalType>("…");

// Querying a database using Dapper
connection.Query<InternalType>("…");

I personally hit this with the third use case, since I was declaring an internal type that would only be used for a single query result with Dapper, which would then be post-processed into a different (public) type. I wouldn’t be surprised if the analyzer also flagged other situations, e.g. keyless entity types in EF Core (previously called query types), which are commonly used for read-only queries, e.g. against views or stored procedures.

CoolDadTx

comment created time in 2 months


Pull request review comment dotnet/designs

Workload Target Imports Design

# .NET Workloads Target Import Ordering

**Owners** [Sarah Oslund](https://github.com/sfoslund) | [Daniel Plaisted](https://github.com/dsplaisted)

## Background

To support .NET SDK workloads, we [changed the order of targets imports](https://github.com/dotnet/sdk/pull/14393) to allow SDK workloads to change property defaults. When we did this, we also changed the import order of some Windows and WPF targets, as we want to make it a workload in the future, which caused a breaking change. It appears that it is not possible to allow workloads to change property defaults and make support for Windows a .NET SDK workload without introducing a breaking change. As a result, this document explores possible solutions to minimize the user impact.

### Original Ordering Change

Originally, the Windows, WindowsDesktop, and workload targets were imported at the end of `Microsoft.NET.Sdk.targets`, which was almost the last file imported. However, this was not an appropriate place for the imports if those targets were to override default property values that were set in .NET SDK or MSBuild common targets. The workload needs a chance to set the property, if it's not already set, before the default logic would do so.

Because of this, the workload targets import [was moved](https://github.com/dotnet/sdk/pull/14393) to come after the target framework parsing. This is because whether a workload is used (and hence needs to be imported) may depend on the target framework or platform, so those conditions should go after the `TargetFramework` has been parsed into the corresponding component properties (`TargetFrameworkIdentifier`, `TargetFrameworkVersion`, `TargetPlatformIdentifier`, and `TargetPlatformVersion`).

The Windows and WindowsDesktop targets were moved together with the workload targets import, as we expect them to eventually become part of a workload. However, this moved those imports before the import of `Directory.Build.targets`, which meant that properties set in that file (such as `UseWPF` and `UseWindowsForms`) would no longer take effect.

This is an issue not just because it's a breaking change, but because it means that `Directory.Build.targets` can never be used to set a property that determines whether a workload is used.

### MSBuild Importing Context

#### MSBuild Evaluation

We are primarily concerned with "[Pass 1](https://github.com/dotnet/msbuild/blob/6f9e0d620718578aab8dafc439d4501339fa4810/src/Build/Evaluation/Evaluator.cs#L613)" of [MSBuild Evaluation](https://docs.microsoft.com/en-us/visualstudio/msbuild/build-process-overview#evaluation-phase), where properties are evaluated and project imports are loaded.

Some properties depend on other properties, so the order in which they are declared matters. For example, the default value for the `GenerateDependencyFile` property depends on the `TargetFrameworkIdentifier` property, which itself is derived from the `TargetFramework` property. So these declarations need to be evaluated in the correct order for the values to be set correctly.

Likewise, project imports (typically used to import `.targets` or `.props` files) can be conditioned on property values. So if the WindowsDesktop targets are imported when `UseWPF` or `UseWindowsForms` is true, then those properties need to be set before the WindowsDesktop project import is evaluated.

The evaluation ordering of MSBuild targets is usually not important. The order that the targets are executed in is determined by the dependencies between them, not by the order they come in evaluation. The exception to this is that you can override a target by defining another target with the same name later in evaluation.

#### Exploring Import Order

There are many `.props` and `.targets` files that get imported when building a .NET project. One way to see what is imported and in what order is by preprocessing a project, either with the `-pp:` [command line argument](https://docs.microsoft.com/en-us/visualstudio/msbuild/msbuild-command-line-reference) or with [MSBuild Structured Log Viewer](https://msbuildlog.com/). This will create a single aggregated project file with all project imports expanded inline.

It is also possible to explore the project imports in the Visual Studio Solution Explorer. Clicking on the **Show All Files** button will add an **Imports** node to the solution tree under the project. You can expand this node and explore the tree of imports active in the project. You can also double click on an imported file to open it up in the editor and view its contents.

![Project Imports in Solution Explorer](./solution-explorer-project-imports.png)

#### Order of Imports in the .NET SDK

The following is a simplified list of the files that are imported in an SDK-style .NET project:

- `Directory.Build.props`
- Main project file
- Version logic (`Microsoft.NET.DefaultAssemblyInfo.targets`)
- Output path logic (`Microsoft.NET.DefaultOutputPaths.targets`)
- Publish profile
- Target Framework parsing (`Microsoft.NET.TargetFrameworkInference.targets`)
  - Also appends target framework to output and intermediate paths
- Runtime identifier inference (`Microsoft.NET.RuntimeIdentifierInference.targets`)
  - Also appends Runtime Identifier to output and intermediate paths
- Workload targets imports
- Language targets (e.g. `Microsoft.CSharp.targets`) and MSBuild common targets
- `Directory.Build.targets`
- (Rest of) .NET SDK targets
- Old location for Windows, WindowsDesktop, and workload targets imports

#### `Directory.Build.targets` import location

Conceptually, `Directory.Build.targets` is imported after the body of the main project file. The exact location it is imported at is not something most developers likely think about, but it is a good place to put common build logic that depends on properties set in the project file, such as the `TargetFramework`.

There's not a perfect place to import `Directory.Build.targets`. However, given what we've learned, it may be that the best place to import it is after the TargetFramework parsing, and before the workloads are imported. That way the logic in `Directory.Build.targets` would still be able to depend on the parsed TargetFramework information, but would not be able to override all of the targets and properties set by the .NET SDK and MSBuild common targets that it can today.

## Proposed Solutions

The following are the current proposed solutions to the problem outlined above, to be reviewed by the community.

### Extension Point via Property

Support an `AfterTargetFrameworkInferenceTargets` property. This property could be used by creating a `Directory.AfterTargetFrameworkInference.targets` file and putting the following in the `Directory.Build.props` file located in the same folder:

```xml
<AfterTargetFrameworkInferenceTargets>$(MSBuildThisFileDirectory)Directory.AfterTargetFrameworkInference.targets</AfterTargetFrameworkInferenceTargets>
```

#### Pros:

- Simple to implement
- Low compat and performance impacts
- Matches existing `BeforeTargetFrameworkInferenceTargets` property

#### Cons:

- Doesn’t match existing `Directory.Build.props` and `Directory.Build.targets` pattern, or the general principle of the SDK, which is to have sensible convention-based defaults that can be overridden

### New Automatically Imported .targets File

Automatically find and import a `Directory.AfterTargetFrameworkInference.targets` file.

#### Pros:

- Matches existing convention for `Directory.Build.props` and `Directory.Build.targets`
- Probably closest to what developers would expect

#### Cons:

- Possible performance impact
- Adds another way for a build to “leak out” of a repo root (which could be a security issue)
- More extension points may be needed in the future, and we probably don’t want to add a new auto-imported file for each one

A possible mitigation for some of the cons could be to say that we don’t look for the new file to import independently. Rather, we could say that you need to have a `Directory.Build.props` file and that the new file needs to be in the same folder as `Directory.Build.props`. So a `Directory.AfterTargetFrameworkInference.targets` file outside the repo root wouldn’t automatically be imported unless a `Directory.Build.props` file was going to be imported from outside the repo anyway.

### Change import location of `Directory.Build.targets`

Change `Directory.Build.targets` to be imported after TargetFramework parsing but before workloads and most of the .NET SDK and common targets. This would be a big breaking change but might not affect most people who use `Directory.Build.targets`. It would break people who use `Directory.Build.targets` to override properties or targets in the MSBuild common targets.

Do we have a list of which targets would be affected here? (Alternatively, a link to the files, so we could check if this affected any of our use cases?)

sfoslund

comment created time in 2 months

issue comment docker/for-win

Unable to bind ports: Docker-for-Windows & Hyper-V excluding but not using important port ranges

I hit this today too, possibly after updating to the latest Docker Desktop version (at least it was still working on Friday, and updating Docker was the first thing I did today). It caused port access problems while attempting to use IIS Express. After searching for quite a while, I found some descriptions of those reserved ports and of Docker and/or Hyper-V being a possible cause of this.

What (likely) ultimately fixed it for me were the instructions detailed above by @iamsurfing, even though I did verify that the port ranges were identical before. I guess the order of steps combined with restarting the NAT service helped (even though I did reboot in between).

veqryn

comment created time in 2 months

created tag poke/Westerhoff.AspNetCore.TemplateRendering

tag v1.0.0

Razor template rendering built on top of ASP.NET Core

created time in 2 months

created branch poke/Westerhoff.AspNetCore.TemplateRendering

branch : main

created branch time in 2 months

created repository poke/Westerhoff.AspNetCore.TemplateRendering

Razor template rendering built on top of ASP.NET Core

created time in 2 months

issue comment microsoft/terminal

Git output oddly indented and sometimes causing crash with TERM=msys

I unfortunately cannot use the Feedback hub on this machine since my domain policies prevent me from changing the data privacy settings that are required to use it… I can try to replicate this during the weekend on a different computer.

For what it’s worth, I couldn’t replicate the crash just now (of course not… 🙄), but here is a screenshot of the funny output:

Screenshot of the oddly indented output

poke

comment created time in 2 months

issue comment dotnet/csharplang

Can't use C# 8.0 using declaration with a discard

That being said, I'm not really sure what the use case for using (GetSomethingDisposable()); would be in the first place when you can just call .Dispose().

The use case is pretty simple and has already been mentioned before in this thread. Sometimes, there are things that are opened and that affect other things until they get disposed. These could be things like logging scopes or database transaction scopes. You don’t interact with those, except that you create them and dispose them at the end:

public void DoSomething()
{
    using (_logger.BeginScope("Working on something"))
    {
        _logger.LogInformation("Done step 1");
        _logger.LogInformation("Done step 2");
    }
}

This would be perfectly fine with a using statement. BeginScope even returns just an IDisposable, so there is nothing you need to do with it except dispose it when you want to close the scope.

Now, the reason for the introduction of using declarations was to reduce the level of nesting. In a case where the using block spans the whole method, that makes perfect sense. So we could rewrite the above to the following and get rid of an indentation level:

public void DoSomething()
{
    using var scope = _logger.BeginScope("Working on something");
    _logger.LogInformation("Done step 1");
    _logger.LogInformation("Done step 2");
}

This still works just fine but now you have introduced a scope variable that you don’t do anything with. The disposal implicitly already happens at the end of the method without you having to do anything. So you don’t need that variable, other than to satisfy the compiler here.

Fortunately, the analyzers no longer suggest code changes here. Before, they would “complain” about the first example not using a using declaration, and about the second example having an unused variable.

Ideally, the compiler would just allow a discard here though. That way, we could have the benefit of the using declaration without having to mentally work with a variable we don’t use.

using _ = _logger.BeginScope("Working on something");

I would agree that other syntaxes are likely too close to the using statement and would be too confusing here.

RayKoopa

comment created time in 2 months

issue comment dotnet/aspnetcore

ASP.NET Core and SPAs in .NET 6

When the ASP.NET Core app is launched, the front-end development server is launched just as before, but the development server is configured to proxy requests to the backend ASP.NET Core process. All of the front-end specific configuration to setup proxying is part of the app, not ASP.NET Core.

Reading through the announcement post, it occurred to me that I misunderstood this feature from the beginning. This new approach is actually very different from how it was before.

I’m honestly not sure if relying on a proxy feature in the client-side development server is a good idea or not. I do have multiple issues with this:

  • The setup is now very different from the production situation where the client-side application is served by the ASP.NET Core server.
  • This requires the use of a client-side development server that supports proxying. This might not be an issue currently, where almost everything (unfortunately) relies on Webpack, but this might change in the future, and proxy capabilities might not always be given. So this almost feels like vendor lock-in for Webpack here.
  • Complex authentication situations, like Windows authentication or certificate-based authentication, are likely not supported now at all. This would at least break my current project.
  • Advanced Kestrel features might not be supported at all with a Node-based proxy in front of it. Things like HTTP/2 or even the newer HTTP protocols come to mind here. This might affect things like gRPC-Web.

Overall, I don’t really trust a Node development server to be really capable enough for advanced situations. Sure, it likely works for the simple case but I will have to test out if this approach would work for my situations. It does put a lot of dependency on the SPA development server though, which is a move I don’t really like.

That being said, is the reversed proxy setup still available, where ASP.NET Core is the primary server which proxies to the SPA dev server? I will have to continue to rely on that, and this issue didn’t read as an announcement of a breaking change yet.
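
For reference, this is roughly the setup I mean, as it works today with the SpaServices extensions. A minimal sketch inside Startup.Configure, where env is the usual IWebHostEnvironment and the port is just an example:

app.UseSpa(spa =>
{
    spa.Options.SourcePath = "ClientApp";

    if (env.IsDevelopment())
    {
        // ASP.NET Core stays the primary server and forwards requests it
        // does not handle itself to the separately running SPA dev server.
        spa.UseProxyToSpaDevelopmentServer("http://localhost:4200");
    }
});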

LadyNaggaga

comment created time in 2 months

issue comment dotnet/runtime

Add Big O notation to LINQ/Collections methods xml doc

I believe a rough outline, similar to how Python does it in their wiki, would be very useful. It doesn’t need to attempt to be complete, and it could still list some good-to-know corner cases (e.g. Contains on sets, or the OrderBy(…).First() optimization).

I don't think this should be on each LINQ method's own doc page, simply because then there would be an expectation that it's both complete and accurate, but maybe we can collect this and place it as some aside article in the docs.
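
To illustrate the kind of corner cases I mean (my own example, not something from the docs):

using System.Collections.Generic;
using System.Linq;

var numbers = Enumerable.Range(0, 1_000_000).ToList();
var numberSet = new HashSet<int>(numbers);

// Both calls look identical at the call site, but List<T>.Contains is O(n)
// while HashSet<T>.Contains is O(1) on average.
bool inList = numbers.Contains(999_999);
bool inSet = numberSet.Contains(999_999);

// OrderBy(...).First() does not require a full O(n log n) sort; the .NET Core
// LINQ implementation can answer it in a single O(n) pass, similar to Min.
int smallest = numbers.OrderBy(x => x).First();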

HavenDV

comment created time in 2 months

issue opened microsoft/terminal

Git output oddly indented and sometimes causing crash with TERM=msys

Windows Terminal version (or Windows build number)

1.7.1033.0

Other Software

Using Git for Windows version 2.31.1.windows.1 with the bundled vim v8.2.1895 with PowerShell 5.1.19041.906.

Steps to reproduce

In a new folder, enter the following commands one after another:

PS C:\TerminalIssueRepro> $env:TERM = 'msys'
PS C:\TerminalIssueRepro> git init
PS C:\TerminalIssueRepro> 'bin/' > .gitignore
PS C:\TerminalIssueRepro> git add .\.gitignore
PS C:\TerminalIssueRepro> git commit

vim opens to prompt for a commit message, so enter something there, save and exit.

Expected Behavior

The console window should look like this:

PS C:\TerminalIssueRepro> git init
Initialized empty Git repository in C:/TerminalIssueRepro/.git/
PS C:\TerminalIssueRepro> 'bin/' > .gitignore
PS C:\TerminalIssueRepro> git add .\.gitignore
PS C:\TerminalIssueRepro> git commit
[main (root-commit) abe2cba] Initialize
1 file changed, 0 insertions(+), 0 deletions(-)
create mode 100644 .gitignore
PS C:\TerminalIssueRepro> 

Actual Behavior

Instead, due to the TERM=msys, the output looks like this:

PS C:\TerminalIssueRepro> git init
Initialized empty Git repository in C:/TerminalIssueRepro/.git/
PS C:\TerminalIssueRepro> 'bin/' > .gitignore
PS C:\TerminalIssueRepro> git add .\.gitignore
PS C:\TerminalIssueRepro> git commit
[main (root-commit) abe2cba] Initialize
                                        1 file changed, 0 insertions(+), 0 deletions(-)
                                                                                        create mode 100644 .gitignore

This can be reproduced by re-triggering vim for editing, e.g. using git commit --amend. Sometimes, the terminal process completely crashes after closing vim, e.g. with this output:

PS C:\TerminalIssueRepro> git commit --amend
[main 27a142f] Initialize
                          Date: Sat May 22 14:20:41 2021 +0200
                                                               1 file changed, 0 insertions(+), 0 deletions(-)
                                                                                                               create mode 100644 .gitignore
[Verarbeitung des Prozesses mit Code 5 beendet]

(Error message says “Processing of process ended with code 5”)

If I switch to the TERM mode cygwin (the default, I believe?), then the output will look fine in that there is no odd indentation and the process won’t crash. However, doing so means that after exiting vim, the screen will be completely blank above the generated output, i.e. the vim process blanks the previous console output.

In ConEmu, using TERM=msys works without any problems, as it does in the native PowerShell host or cmd.exe.

created time in 2 months

issue comment microsoft/azure-pipelines-agent

Agent installer does not support group managed service accounts (gMSA)

I just hit this as well, using the interactive configuration. The configuration wizard is able to pick up the service account correctly, but then the validation for the password fails because passwords for managed service accounts are empty:

Enter run agent as service? (Y/N) (press enter for N) > y
Enter User account to use for the service (press enter for NT AUTHORITY\NETWORK SERVICE) > domain.test\vsts-agent$
Enter Password for the account domain.test\vsts-agent$ >
Enter Password for the account domain.test\vsts-agent$ >
Enter Password for the account domain.test\vsts-agent$ >
Enter Password for the account domain.test\vsts-agent$ >
…

As far as I can tell, the validation happens here:

https://github.com/microsoft/azure-pipelines-agent/blob/66e6e9a9aa4503139f5dcf4a9894ce349b68e0ab/src/Agent.Listener/CommandSettings.cs#L389-L397

Here, the validator Validators.NonEmptyValidator is being used. This may make sense for usual accounts (although I think technically user accounts can have empty passwords too?) but for managed service accounts, the password will always be empty.

I think a quick fix would be to simply remove that validator there. If it is desired to keep the validator for normal accounts (assuming that empty passwords are indeed an accident), then an alternative idea would be to check for managed service accounts over here:

https://github.com/microsoft/azure-pipelines-agent/blob/66e6e9a9aa4503139f5dcf4a9894ce349b68e0ab/src/Agent.Listener/Configuration.Windows/WindowsServiceControlManager.cs#L69-L74

The code there already checks for well-known accounts, skipping the password prompt. The same could be done for managed service accounts. A simple check that would avoid talking to the AD first would be to check for a trailing dollar sign ($), since managed service account names will end with that when used.
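
To sketch the idea (a hypothetical helper, not actual agent code):

// Treat account names with a trailing dollar sign as (group) managed
// service accounts, so the password prompt can be skipped for them.
private static bool IsManagedServiceAccount(string accountName)
    => !string.IsNullOrEmpty(accountName)
       && accountName.EndsWith("$", StringComparison.Ordinal);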

Let me know if you want me to contribute any of these changes, and I will happily do so.


The agent uses OAuth tokens per build to access resources. Is there a specific reason you need the service to run as a gMSA account instead of network service or local system? Our goal is the account the agent runs as is mostly irrelevant. I'm interested in your scenario.

The agent uses OAuth to talk to Azure DevOps, yes. But this is about the permissions the process executing the jobs runs with. If you run your agent with the default NT Service account, then you are essentially giving it administrator privileges. It shouldn’t be surprising that we don’t want to give administrator access to a process that locally runs arbitrary code that is checked in remotely. So using a separate account gives us control over what the process can do, and what resources it can access. And using gMSA for that is considered a good practice.

michael-baker

comment created time in 3 months

issue comment dotnet/aspnetcore

ASP.NET Core and SPAs in .NET 6

@Drabenstein I wouldn’t expect the backend to kill the npm process at all. Since the npm command is spawned in a separate window (as seen in the GIFs), that window can just stay open when the server process stops or restarts. It would actually feel weird to me if there is a separate window with the npm process and that process gets killed automatically when the server exits.

But yes, if the server process is able to detect and connect to an npm development server that is launched separately, then that would be fine with me too. As long as I don’t need to make code changes to support both proxying to an existing dev server and launching one automatically if you just start the server, I am totally fine with that.

I just think that it would still be better if the default behavior were improved, and I think that only works if the npm process stays alive after the backend server has stopped.

LadyNaggaga

comment created time in 3 months

issue comment dotnet/aspnetcore

ASP.NET Core and SPAs in .NET 6

@LadyNaggaga Thanks for the clarifications.

So it’s mostly the same as it is now (e.g. with the UseReactDevelopmentServer()) just that there is a separate window now.

To be completely honest, I don’t really see much benefit there. Building a client-side application is usually very slow on first launch (especially for Angular projects). So what you usually want to do in a development scenario is to run both the server and the client separately, so changes on one end will not force a recompilation of the other. That way, when you make changes on the client side, the running Webpack watcher is able to compile just the changed files very quickly. But when the client-side build pipeline is coupled to the server backend, every backend change means you have to restart the slow client-side development process too.

I think this would already be solved if the backend simply did not terminate the spawned client-side dev server process. That way, the backend would be able to handle both situations (when a dev server is already running, and when it isn’t), and you would get huge speed benefits when restarting the backend.
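
Something along these lines would be enough, I think. A rough sketch inside the UseSpa(...) configuration using the existing SpaServices extensions, where the detection helper is hypothetical (e.g. probing whether the dev server port is already in use):

if (env.IsDevelopment())
{
    if (SpaDevServerIsAlreadyRunning()) // hypothetical check
    {
        // Reuse the externally started dev server instead of spawning a new one.
        spa.UseProxyToSpaDevelopmentServer("http://localhost:3000");
    }
    else
    {
        // Fall back to spawning the dev server via npm.
        spa.UseReactDevelopmentServer(npmScript: "start");
    }
}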

LadyNaggaga

comment created time in 3 months

issue comment dotnet/aspnetcore

ASP.NET Core 6 and Authentication Servers Discussion

I believe you're employer will prefer payed 10 time more and be sure that he has control to the life of the solution and impact of the license.

Absolutely not.

A publicly vetted solution will always win over a self-made attempt in an authentication context. If you believe otherwise, then you are heavily underestimating the complexity involved with authentication. And having a commercial option here is even preferable, because you get commercial support for it. That’s usually a very valuable benefit, especially to larger businesses.

But we can't have anything in our dev that ask a licensing server or a recurrent pricing. Whatever the price is. We tried once it was a nightmare.

Then good luck, considering everything is already moving to subscription models…

They earn $9,000 per year for each without counting support and plugin.

I don't understand. Are you arguing that this is… a lot?! Because it is really low, considering the work that goes into the project. And that money came only from donations, from 75 entities. Great.

But sure, continue benefitting from others’ work in your big enterprise, and keep on complaining once they ask for anything back.

blowdart

comment created time in 3 months

issue comment dotnet/aspnetcore

ASP.NET Core 6 and Authentication Servers Discussion

@Ponant

Today you won't make money with software, OSS or not.

Uhm, what? I do earn money by building software. And I utilize a lot of open source libraries as a way to save my own time.

So I am personally very happy to see more maintainers moving to a paid model. This means that it will be easier for me to argue to both my employer and my customers that we should give money back to those maintainers, simply because there’s no way around it. And it won’t actually be a problem for them, because the time it would cost me to build (learn, build, maintain, support, and document) this from scratch would cost them way more.

blowdart

comment created time in 3 months

issue comment dotnet/aspnetcore

ASP.NET Core 6 and Authentication Servers Discussion

@GeraudFabien

That why they can make you pay 1500 + 300 * UserCount usd by year

I think you are misunderstanding the word “client” in the IdentityServer pricing. Client refers to OAuth clients, i.e. applications that are registered with the server and can authenticate the user. I do not believe that there is any kind of user restriction in the Duende licensing, probably because IdSrv doesn’t actually care about users.

1500 by year alone is more than VS and azure/AWS and CI budget on most team i know

That sounds odd, considering that VS Professional alone is already $500 per year per person. An IdentityServer license is probably not as much of an issue as you think it is. And if you have such a small team, I would suggest you actually rethink whether you even need your own authentication server. Chances are that you shouldn’t roll your own anyway.

But there is other solution like :

  • a partenaria with keycloak (I never used keycloak but from what i see it's the only OSS solution supported now).

There is no need for a partnership, since there doesn’t actually need to be any kind of connection between these. Just install KeyCloak (or set up any other authentication provider, really) and configure it according to their documentation. Then follow the ASP.NET Core documentation and configure OpenID Connect or JwtBearer authentication. That way, your app can authenticate with almost any authentication provider, be it IdentityServer, KeyCloak, AAD, Auth0, Google, whatever.
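
The ASP.NET Core side then looks roughly the same regardless of which provider is behind the authority URL. A minimal sketch using the Microsoft.AspNetCore.Authentication.OpenIdConnect package, with placeholder URLs and IDs:

services.AddAuthentication(options =>
{
    options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
    options.DefaultChallengeScheme = OpenIdConnectDefaults.AuthenticationScheme;
})
.AddCookie()
.AddOpenIdConnect(options =>
{
    // Works the same whether this points at KeyCloak, IdentityServer, AAD, Auth0, …
    options.Authority = "https://login.example.com/auth/realms/my-realm";
    options.ClientId = "my-app";
    options.ResponseType = "code";
});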

IdentityServer being integrated within ASP.NET Core is a particular detail that is very likely overkill for most people.

  • Document to help us implement a solution for small project (project where 1500 usd is actually too much)

If $1500 is already too much for you, then chances are that the special licensing terms of IdentityServer’s Community Edition would be good enough for you.

Today there is not community project so conversation could be re-open. Identity server was the main argument. Now it's dead the situation change.

That is just wrong. The other popular alternative, OpenIddict, was already around during the last discussion, and it is still around now. So you can pick that if you are having trouble adopting IdentityServer’s new licensing terms.

But IdentityServer is neither dead nor is it no longer a community project.

@Ponant

So MS could buy them out at with a 5 years return, which means at 1,250,000 USD.

Suggesting that MS should just buy them is a very bad take and would actually hurt the .NET ecosystem very much. The community is already struggling a lot trying to make OSS sustainable. Having Duende succeed here would actually show that we as a community are able to establish sustainable OSS projects. We need more projects like IdentityServer and ImageSharp to normalize paying for the labor of others when companies use that work to earn money.

blowdart

comment created time in 3 months