The .NET Thread: For all things .NET

Jonathon

Ars Legatus Legionis
16,541
Subscriptor
Do we have any Roslyn/.NET Compiler Platform gurus around?

I've got a working prototype for some stuff (involving compiling and running code at runtime), but some of the ways I'm (ab)using the C# scripting API feel like I'm missing something, particularly around actually instantiating classes declared by (but not returned by) my input scripts. And the Roslyn docs are annoyingly sparse for anything outside their basic code analysis and code transformation use cases.
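
To make the question concrete: is something like this toy version (made-up names, bare-minimum references) really the expected way to get at a class the script merely declares -- skip CSharpScript, compile the source myself, and reflect over the emitted assembly -- or does the scripting API have a proper hook for this that I'm not seeing?

Code:
// Toy sketch only: compile user-supplied source to an in-memory assembly and
// instantiate a class it declares via reflection. Needs the Microsoft.CodeAnalysis.CSharp
// package; a real host would add far more metadata references and error handling.
using System;
using System.IO;
using System.Linq;
using System.Reflection;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;

var source = "public class Greeter { public string Hello() => \"hi\"; }";

var compilation = CSharpCompilation.Create(
    "UserScripts",
    new[] { CSharpSyntaxTree.ParseText(source) },
    new[] { MetadataReference.CreateFromFile(typeof(object).Assembly.Location) },
    new CSharpCompilationOptions(OutputKind.DynamicallyLinkedLibrary));

using var ms = new MemoryStream();
var result = compilation.Emit(ms);
if (!result.Success)
    throw new InvalidOperationException(string.Join(Environment.NewLine, result.Diagnostics));

var assembly = Assembly.Load(ms.ToArray());
var type = assembly.GetTypes().Single(t => t.Name == "Greeter");
var instance = Activator.CreateInstance(type);
Console.WriteLine(type.GetMethod("Hello")!.Invoke(instance, null));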
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
I'm trying to get a solid understanding of when in an ASP.NET Core app to make an application service and when to not.

Am I correct in simplifying this to: anything requiring dependencies should be written as an application service registered in the dependency injection container, and shared code that has no dependencies (and is not a dependency of anything in the container) should be housed in another way (static class, singleton, extension methods, etc.)? IOW, is there any real reason why things without dependencies should be constructed by the dependency injection container?
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
The point of a composition root is that it's the composition root. All dependencies should be constructed there. If you're new-ing up something in a constructor, just stop.
  • Static classes should be used for extension methods, or to hold bags of constants + associated convenience methods, and nothing more.
  • Singletons should just be written as regular classes, no exceptions. They shouldn't be noteworthy under any circumstances. (Another name for noteworthy is "stateful".) You should be able to trivially put them behind an interface and mock them. If I saw a Class.Instance.DoSomething in a code review, I would nuke it from orbit because composition roots are A Thing.
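
To spell out that last bullet with a throwaway example (IClock/SystemClock/InvoiceService are invented; this is just Microsoft.Extensions.DependencyInjection doing its normal thing): the "singleton" is a perfectly boring class behind an interface, and the only place that knows it's a singleton is the composition root.

Code:
using System;
using Microsoft.Extensions.DependencyInjection;

public interface IClock
{
    DateTimeOffset UtcNow { get; }
}

// A boring class. Nothing here says "singleton"; that's the composition root's decision.
public sealed class SystemClock : IClock
{
    public DateTimeOffset UtcNow => DateTimeOffset.UtcNow;
}

public sealed class InvoiceService
{
    private readonly IClock _clock;

    // The dependency arrives through the constructor; no Clock.Instance anywhere.
    public InvoiceService(IClock clock) => _clock = clock;

    public DateTimeOffset StampIssuedAt() => _clock.UtcNow;
}

public static class Program
{
    public static void Main()
    {
        // The composition root is the only place that decides lifetimes.
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>();
        services.AddTransient<InvoiceService>();

        using var provider = services.BuildServiceProvider();
        Console.WriteLine(provider.GetRequiredService<InvoiceService>().StampIssuedAt());
    }
}

And because it's behind an interface, faking the clock in a test is one line.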

 

Ardax

Ars Legatus Legionis
19,076
Subscriptor
The point of a composition root is that it's the composition root. All dependencies should be constructed there. If you're new-ing up something in a constructor, just stop.
Winner winner chicken dinner.

The reason things without dependencies go into the DI container is because they're (presumably) dependencies of something else. Otherwise... why does the code exist at all?
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
The reason things without dependencies go into the DI container is because they're (presumably) dependencies of something else. Otherwise... why does the code exist at all?
  • Static classes should be used for extension methods, or to hold bags of constants + associated convenience methods, and nothing more.
I think this is probably the ultimate answer. I had to think long and hard to come up with any shared code where I wouldn't need (at minimum) a dependency on the logging service. The few examples I could come up with are things like extension methods, bags of constants, and convenience methods. Static classes still have a place, but most real code will need to be a service so that it can do necessary things like logging.
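
For the record, the sort of dependency-free code I mean (made-up example): constants plus an extension method, no logging, no state, nothing to inject, so it stays a static class outside the container.

Code:
using System;

// Dependency-free helper: a bag of constants and an extension method.
public static class StringExtensions
{
    public const int DefaultMaxDisplayLength = 40;

    public static string Truncate(this string value, int maxLength = DefaultMaxDisplayLength)
    {
        ArgumentNullException.ThrowIfNull(value);
        return value.Length <= maxLength ? value : value[..maxLength] + "...";
    }
}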

Thanks for the sanity check, guys!
 
  • Like
Reactions: Ardax

LID919

Ars Centurion
285
Subscriptor
I'm supporting a very old, very large .NET Framework 4.8 project. There is significant interest within the engineering teams to upgrade it to modern .NET.

Has anyone ever performed such a large upgrade? Any particular caveats or gotchas?

The project consists of three or so Windows Forms projects building up the UI, and two dozen or so class library projects with all kinds of different application logic.

I'm guessing the class libraries will be much easier to upgrade than the UI will.

The project is also currently targeting 32-bit because it relies on some 32-bit native libraries which no one has ever managed to successfully upgrade to 64-bit.
 
  • Like
Reactions: svdsinner

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
I'm supporting a very old, very large .NET Framework 4.8 project. There is significant interest within the engineering teams to upgrade it to modern .NET.
I'm just digging into the meat of a project like this. (I've written over a dozen POCs and spent dozens of meetings over the last 3-4 years convincing senior management to green-light the project. It's finally green-lit and I'm finally starting on the ACTUAL codebase! :cool: )

Here are some thoughts:
1) Begin with future maintenance in mind. Don't violate YAGNI, but keep things modular with lots of automated testing so that you won't end up in Regression Test Hell.
2) Anticipate that the core logic might still be used for 10-20 years. But anticipate that the UI will need to be updated/reskinned every 3-4 years.
3) Start with clearly identifying the deep, core issues with the original that you want to get fixed. If there are none, stop here, do not pass go. Keep the list reasonable. (Example: On my project we identified 4 things: an atrocious database schema that needed reworking, a horrible lack of logging, business logic that was randomly spread throughout different places in the app, and outdated security mechanisms.)
4) Swallow hard and accept that unless you do things better than the previous team, you will simply be exchanging the poor choices of the previous app with a new set of poor choices in your new app. You should have regular discussions of "Is this the least stupid way we can do this?" and encourage everybody to point out flaws in your plan so that they don't turn into flaws in the final product. Push yourselves so that nobody will show up in 5 years, look at your app, and go, "What idiot designed this?"
5) Keep your app as simple and (modern) standards-based as you can. Your first thought should be "Can this be done in the manner that the .NET Core framework suggests?" Always try to do things "the Microsoft Way" instead of reinventing the wheel. DON'T copy/paste code from the old app when there are far better, more modern ways to do it.
6) Under no circumstances allow the project to fail to deliver SOMETHING TANGIBLE TO MANAGEMENT in a reasonable time frame. No matter what anyone tells you, important people will have an invisible timer counting down in their heads. If it goes off before your team has been able to show significant deliverables, your project will get really ugly, really fast.

And, just to reiterate point 3: DO NOT just change an important app to modern .NET just because it is newer and cooler. If you aren't also improving the app, you are taking on major risk with little to no rewards.
 
  • Like
Reactions: Ardax

Ardax

Ars Legatus Legionis
19,076
Subscriptor
Swallow hard and accept that unless you do things better than the previous team, you will simply be exchanging the poor choices of the previous app with a new set of poor choices in your new app.
Or worse, you're compounding those poor choices by adding your own on top.

There is significant interest within the engineering teams to upgrade it to modern .NET.
The more important question is: Can your teams articulate the business benefits to management in a language they can understand?

If (and "if" is going to be doing a whole lot of heavy lifting here) your application architecture is in decent shape, you should be able to baby-step this by upgrading the class libraries to target netstandard2.0 and working outward from there.

If you don't already have great test coverage, that's going to need to be fixed first.

And, just to reiterate point 3: DO NOT just change an important app to modern .NET just because it is newer and cooler. If you aren't also improving the app, you are taking on major risk with little to no rewards.

To be somewhat clear, sometimes just being able to write a new UI in modern ASP.NET Core and be able to deploy your app on non-Windows servers is enough of a justification. But yes, this needs to be something a lot more than "the geeks wanna write with the new shiny" before you're going to get the green light for this.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
To be somewhat clear, sometimes just being able to write a new UI in modern ASP.NET Core and be able to deploy your app on non-Windows servers is enough of a justification. But yes, this needs to be something a lot more than "the geeks wanna write with the new shiny" before you're going to get the green light for this.
Excellent clarification. Preventing the code from leaving support is absolutely a business justification. However, .NET Framework 4.8 is going to remain in support for several years to come so that might not be his best pitch.

If you don't already have great test coverage
Poor test coverage can be turned into an effective pitch: "Remember that release you wanted to go faster, but all the required manual testing took 6 weeks and you hated how long it took? We can fix that."
 

Ardax

Ars Legatus Legionis
19,076
Subscriptor
Preventing the code from leaving support is absolutely a business justification. However, .NET Framework 4.8 is going to remain in support for several years to come so that might not be his best pitch.
Fair. It's more that something along the lines of "we want to write web front ends to replace the WinForms apps and deploy them to the cloud" gives you a lot of leverage for porting up, instead of running Windows VMs in Azure just to host IIS. I wouldn't develop a new ASP.NET app on .NET Framework in 2023 unless it could trivially be ported to run on current .NET.
 

LID919

Ars Centurion
285
Subscriptor
Updated our ancient dependency injector yesterday.

It was originally written for a very old version of .NET Framework and kept alive to support a massive Windows Forms app. It works well for providing a class implementation for an interface based on some runtime metadata.

However, it lacked the ability to provide dependencies to dependencies. Any class which fulfilled an interface for this purpose needed to have a parameterless constructor.

I cracked the damn thing open and got down and dirty with .NET reflection to construct dependencies of dependencies.

Now it analyzes the constructors of a discovered class, finds the simplest-to-satisfy constructor, and constructs that one first. It does so to an arbitrary depth.

It could still use some improvement (checking for cyclic dependencies and other potential issues), but it's working and I'm quite happy with the results.
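
The recursive piece boils down to something like this (a much-simplified sketch with invented names, not the actual code, and still missing the cycle detection):

Code:
using System;
using System.Collections.Generic;
using System.Linq;

// Simplified sketch: map interfaces to implementations, pick the constructor whose
// parameters we can all satisfy, and build each parameter the same way, recursively.
public sealed class TinyResolver
{
    private readonly Dictionary<Type, Type> _map; // interface -> implementation

    public TinyResolver(Dictionary<Type, Type> map) => _map = map;

    public object Resolve(Type service)
    {
        var impl = _map.TryGetValue(service, out var mapped) ? mapped : service;

        // "Simplest to satisfy": the resolvable constructor with the fewest parameters.
        var ctor = impl.GetConstructors()
            .Where(c => c.GetParameters().All(p => CanResolve(p.ParameterType)))
            .OrderBy(c => c.GetParameters().Length)
            .FirstOrDefault()
            ?? throw new InvalidOperationException($"No resolvable constructor on {impl}.");

        // Recurse to arbitrary depth. (No cycle detection yet; a real version would
        // track in-progress types and bail out on a repeat.)
        var args = ctor.GetParameters()
            .Select(p => Resolve(p.ParameterType))
            .ToArray();

        return ctor.Invoke(args);
    }

    private bool CanResolve(Type t) =>
        _map.ContainsKey(t) || (t.IsClass && !t.IsAbstract);
}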
 
  • Like
Reactions: svdsinner

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
our place is going all in on Linux.

This is good.

But my inadequate knowledge of the low-level tooling is not. Previously I could do great things with VS, WinDbg, and SOS if need be. Now I have Rider. I love it as an IDE, but the debugging tool chain blows for anything complex (no mixed mode).

Apparently someone has done SOS for LLVM's debugger (LLDB).

Anyone got any good guides for someone with a clue in the old Windows world, but little of the Linux one, for diagnosing pretty low-level stuff in a .NET app where I also care about the unmanaged heap and the like?

Same goes for decent profilers that can bridge the gap.
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
So we have a bunch of libs and services that are not making use of nullable reference types, and a bunch that are. (Mostly due to when they were "born", so to speak.) For the last couple of weeks, I've been picking at the older ones, and getting them ready for a .NET 6 to .NET 8 migration. Basically that has amounted to applying our naming conventions, enabling implicit usings and warnings-as-errors(*) (with 4 exceptions), and simplifying build files.

(*) No idea why this wasn't on in the first place. I had assumed it was a default setting; I guess not.

Basic takeaways:

  • Boy do process boundaries matter. You really need to pay attention to API docs to see which fields are nullable, and which are not. Once this tedious piece is done, the rest tends to fall into place. And you've got to hope the API docs aren't lying to you.
  • Coalescing with an empty value is sometimes OK, and sometimes isn't, depending on the conventions used in the program. Different programs written & maintained by different developers will have different conventions! Discovering what's at work, and how each property is used, is essential. You could mark everything as nullable, but that negates the benefits of NRTs! So it's really worth taking the time on each and every property. I did A LOT of searching in the codebase just to be sure.
  • You can't blast through it. You have to be in the right frame of mind to really understand things, and to put up with the tedium.
  • Libs that you have control over that contain business logic are pretty easy to adapt. You specify what you want at the entry points, and then go back and adapt the consuming programs to supply that.
  • I found a lot of potential bugs. I say "potential" because errors weren't happening there, but they could have if up/downstream code changed and the foundational assumptions were violated.
  • I wound up thinking "null" as a concept isn't really bad, but it is dangerous, in part because there hasn't historically been a way to say "this value must not be null" that the compiler could check for ahead of time.
  • Libs in general are the hardest to adapt, because their repos exist outside a runtime context, unlike a service. Lots of places are unambiguous, but lots of places are ambiguous. Can this string be null in any of the four services that consume this? And then you've got to track that down. (Libs that do math or whatever don't have these constraints, or you arrive at a convention where the value is nullable for maximum cross-compatibility with older versions and then immediately have a guard clause; see the sketch after this list.)
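
A condensed, invented example of that last pattern (OrderDto/OrderPricing aren't real code from our repos): the DTO's nullability mirrors what the API docs promise, the entry point states what it actually requires, and a guard clause covers callers that haven't opted into NRTs yet.

Code:
#nullable enable
using System;

public sealed class OrderDto
{
    public OrderDto(string id) => Id = id;

    public string Id { get; }                 // API docs say this is always present
    public string? CouponCode { get; init; }  // API docs say this is optional
}

public static class OrderPricing
{
    public static decimal Discount(OrderDto order)
    {
        // Guard clause for callers compiled without a nullable context.
        ArgumentNullException.ThrowIfNull(order);

        // Coalescing is fine *here* because "no coupon" and "empty coupon" mean the
        // same thing in this program; that is emphatically not true everywhere.
        var coupon = order.CouponCode ?? string.Empty;
        return coupon.Length == 0 ? 0m : 0.10m;
    }
}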

I started with the libraries. It was, uh, not easy going in many cases. Files with #nullable turned on were quite easy to adapt. But towards the middle, I started ping-ponging between libs and services, kind of taking little bites at a time.

One big improvement was publishing both net6.0 and net8.0 binaries from the csproj -- that decouples service framework upgrade versions from lib versions, and it takes literally 3 seconds to do. Just making each shippable bit of work as small as possible was a big "aha" moment for work like this.
 

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
Very much agree with that.
That shows you why nullable (by default) is awful… you end up needing total program/system awareness (often through to the database/other persistent stores) to know what the current semantics are, let alone what they should be. This means doing a retrofit needs expertise and experience, rather than being a "turn the crank" job you can hand to anyone with a clue.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
Has anyone done multi-framework targeting in Visual Studio? I've got most of it figured out, except one thing: If my startup project is multi-targeted, how does it choose which framework to use if I hit "F5" to debug? I know how to set the framework if I build from the command line, but I can't figure out how to make that selection inside Visual Studio.
 
Has anyone done multi-framework targeting in Visual Studio? I've got most of it figured out, except one thing: If my startup project is multi-targeted, how does it choose which framework to use if I hit "F5" to debug? I know how to set the framework if I build from the command line, but I can't figure out how to make that selection inside Visual Studio.
The only good way I've found to do this is to have one folder with your code, then have several separate projects (one for each target framework) outside of it, and add all the source to each project as a link (Add Existing Item, find your code, click on the little arrow next to Add, and change to Add As Link). That way you pick the target framework simply by changing your startup project.

Yes, it is somewhat hard to get used to, but it is by far the least janky solution I know of.

Note that you really, really, really, really need to have all the target framework projects in your CI build at the very least, and preferably in the work solution! If you don't, someone will start using new C# features and it won't break THEM because they're only using the project that targets .NET 8, and now your .NET Framework 4.X build falls down...
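
To make the failure mode concrete (invented example): the SDK defines per-target symbols like NET8_0_OR_GREATER, so code that leans on newer APIs needs a guarded fallback, and you only find out the fallback is missing if CI actually builds the .NET Framework target.

Code:
using System;

public static class Guard
{
    public static void NotNull(object value, string paramName)
    {
#if NET8_0_OR_GREATER
        // Only exists on .NET 6 and later; using it unguarded breaks the 4.x build.
        ArgumentNullException.ThrowIfNull(value, paramName);
#else
        if (value == null) throw new ArgumentNullException(paramName);
#endif
    }
}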
 

nimro

Ars Tribunus Militum
2,097
Subscriptor++
Has anyone done multi-framework targeting in Visual Studio? I've got most of it figured out, except one thing: If my startup project is multi-targeted, how does it choose which framework to use if I hit "F5" to debug? I know how to set the framework if I build from the command line, but I can't figure out how to make that selection inside Visual Studio.
If you aren't worried about syncing the setting over git etc., then just for your local machine you can change it from the dropdown next to the Start Debugging button:
[Screenshot: the target framework dropdown next to the Start Debugging button]
 
  • Like
Reactions: hanser

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
Didn't need to touch launchsettings.json, just having the two targets in <TargetFrameworks> in the csproj was enough. Can you tell us more about your project: what are you targeting, that sort of thing?
The projects in question are Windows Services (C#) projects.

Can I ask how you added the second target? I manually edited the .csproj file. I'm wondering whether, if I had instead done it through some GUI inside Visual Studio, it would've been added automatically.

Also, is there a section in your launchsettings.json that looks like it is related to picking the framework? If so, could you post the relevant bits?
 

nimro

Ars Tribunus Militum
2,097
Subscriptor++
The projects in question are Windows Services (C#) projects.

Can I ask how you added the second target? I manually edited the .csproj file. I'm wondering whether, if I had instead done it through some GUI inside Visual Studio, it would've been added automatically.

Also, is there a section in your launchsettings.json that looks like it is related to picking the framework? If so, could you post the relevant bits?

Is yours an SDK-style project file? If so, ensure you have the TargetFrameworks property with no singular TargetFramework element left over. That should be all you need to do.

If you're using the old-style projects, Visual Studio never really supported multi-targeting very well in those; I'm afraid my memories are very hazy and I've no examples to check any more. You can use the Upgrade Assistant to merely convert the project file to SDK-style without touching the targeted frameworks.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
Just passing on something I discovered in .NET Core (all versions):

When you publish, a folder called "runtimes" is generated. Inside it are several folders that look like they mean something regarding what they apply to. You might assume the "unix" folder would apply only when the application is running on a Unix platform and that it would not matter if you were running on Windows. You would be wrong. We just had a bug on a Windows server that was caused by the deployment script not updating one of the files in the runtimes\unix subtree. Updating the file under the unix subtree resolved the issue on Windows.

Lessons learned:
  1. There is no glory in making a small deployment package if your assumption about what you can remove from it is wrong.
  2. DO NOT assume the folder names in the "runtimes" folder mean what you think they do. Even in the windows subtree there are oddities, like "netcoreapp2.1" files that still matter even if you are running .NET 7 or 8. Just make sure they are all deployed exactly as they are generated when publishing.
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
It's back to just ".NET" now. The only people calling it Core are the people still in pre-Core land. :p

That said, I think you can target specific architectures with dotnet build, and trim down the resultant output using Runtime IDs (RIDs):

I haven't tested it, but something like this for Windows running on x64

Code:
dotnet build -r win-x64 Some.Specific.csproj

This is kind of a neat RID tool, too:
 
  • Like
Reactions: ShuggyCoUk

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
Has anybody looked into AI assistants to use with Visual Studio Professional/Enterprise 2022?
How is the out-of-the-box coding assistance different from GitHub's Copilot or JetBrains AI? With prices for AI assistants as low as they are, I'd like to get one, but I don't really have time to do a big comparison of all the available options. If anyone has used any of the AI assistants inside VS Pro, can you post your experiences?