The .NET Thread: For all things .NET

hanser

Ars Legatus Legionis
41,687
Subscriptor++
To clarify, the main purpose of decoupling and using reflection is to give us the ability to just drag a .dll into a folder, and have it found and the jobs inside it scheduled. Decoupling lets a junior programmer or a non-programmer make or maintain a scheduled task without requiring anything beyond a compiler on their machine. (Envision someone in accounting who wants to keep control of their one weekly batch job. They can edit their 1-class .dll without letting them have any access to the rest of the code base. They issue a pull-request on that one project, we code review it and can drop it into the magic folder at any time to implement their new version)
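(For reference, the mechanism being described boils down to a reflection scan along these lines -- IScheduledJob, scheduler, and jobFolder are made-up names, a minimal sketch only:)

Code:
// Load every .dll dropped into the folder and schedule the jobs inside it.
foreach (var path in Directory.GetFiles(jobFolder, "*.dll"))
{
    var assembly = Assembly.LoadFrom(path);
    var jobTypes = assembly.GetTypes()
        .Where(t => typeof(IScheduledJob).IsAssignableFrom(t) && !t.IsAbstract);
    foreach (var jobType in jobTypes)
        scheduler.Schedule((IScheduledJob)Activator.CreateInstance(jobType));
}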
This is a dead end, and you shouldn't waste your time.

It may also violate your change management policies if you have any kind of SOC or other certification. But mostly you shouldn't do it because it's a waste of time, and doomed to fail.
 

minnmass

Ars Tribunus Militum
2,080
Subscriptor
If you publish the DLL containing the interface/attribute definitions to an internal NuGet repository, then your job developers still don't need anything fancy installed.

Yeah, snark aside: if you're really sure you want any schlub who can con their supervisor into thinking they're a l33t haxor who can automate away the drudgery...

Sorry, snark mode off for real this time.

Here's what I'd do:

Set up a solution in a private git repo somewhere. Include the necessary boilerplate in the repo directly, even if it's a copied interface from the task runner solution.

Anybody (person/department/... - whatever makes sense in your situation) who wants to manage a job can clone the repo and work from there. They each get a namespace within the solution. They can then issue pull requests to main, with someone assigned to skim those PRs for obvious problems - don't worry about the logic of the task, just look for File.Delete("\\\\super_important_share\\"), changes that cross namespaces (which would require approval from both namespaces' owners), or things that'll bring down the job server (Environment.Exit(0)). Limit the number of people who have write access to main.

Set up (e.g.) Jenkins to actually deploy the jobs. Jenkins can poll (or be notified by) branches, and can run some basic sanity checks automatically, too. It can also notify people when deploys are finished.

When the boilerplate needs to be updated, which will happen eventually, you can issue a PR to main. You'll know right away how big the cleanup will be by the number of red squiggles in the solution, and nobody will be able to avoid updating their code to the new requirement: even if there isn't a merge conflict, the PR approver would reject anything that tries to roll the change back.

Envision someone in accounting who wants to keep control of their one weekly batch job. They can edit their 1-class .dll without letting them have any access to the rest of the code base. They issue a pull-request on that one project
Junior programmers often need months of education to grok the basics of "real world" programming, including dealing with source control, pull requests, non-trivial source code, etc... It's exceedingly rare to find people outside of development/QA teams who can handle that stuff even halfway properly.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
If you publish the DLL containing the interface/attribute definitions to an internal NuGet repository, then your job developers still don't need anything fancy installed.
Nice idea. I completely forgot that VS Code has grown to be able to use NuGet packages.

And, for the rest of you guys. :p It never disappoints how people love to point out how using a solution designed for a specific situation can go horribly wrong if blindly used in the wrong situation. :p The idea that being able to drop files into a folder and have them picked up and used by the scheduler would somehow involve an open file-share that anybody could mess with made me giggle. The funniest part is that I've seen places that I could envision doing something that stupid. And sadly, we all probably have seen places where we could envision that happening. D:
 

Lt_Storm

Ars Praefectus
16,294
Subscriptor++
If you publish the DLL containing the interface/attribute definitions to an internal NuGet repository, then your job developers still don't need anything fancy installed.
Nice idea. I completely forgot that VS Code has grown to be able to use NuGet packages.

And, for the rest of you guys. :p It never disappoints how people love to point out how using a solution designed for a specific situation can go horribly wrong if blindly used in the wrong situation. :p The idea that being able to drop files into a folder and have them picked up and used by the scheduler would somehow involve an open file-share that anybody could mess with made me giggle. The funniest part is that I've seen places that I could envision doing something that stupid. And sadly, we all probably have seen places where we could envision that happening. D:

Honestly, I still think you are going to run into bigger idiot problems with this one. Really, you want someone to review the code that is going to be run just to make sure that it is sane because someone *will* want to do something insane and have no idea of what insanity they hath wrought.

Also, you might consider setting time limits for the tasks... that should deal with most of the insanity, at least, save for the rm -rf / sort.
 

minnmass

Ars Tribunus Militum
2,080
Subscriptor
Honestly, I still think you are going to run into bigger idiot problems with this one. Really, you want someone to review the code that is going to be run just to make sure that it is sane because someone *will* want to do something insane and have no idea of what insanity they hath wrought.

Heck: we see this with seasoned developers, especially with misuse of IEnumerables and .ToList().

... took way too long to figure out that IEnumerables and Dapper don't mix well (in the pathological case, requiring a full trip to the DB and new query execution for each row, if the actual query's wrapped in a using statement like it should be). It's a totally non-obvious (at least, at first-pass) interaction; I think we've needed to tell every new dev not to do something like

Code:
private async Task<IEnumerable<Foo>> Get() {
  using (var connection = GetConnection()) {
    // note: "params" is a reserved word in C#, hence "parameters"
    return await connection.QueryAsync<Foo>(query, parameters);
  }
}

public async Task<Foo> GetBiggest() {
  return (await Get()).Max();
}

will require one trip to the DB for each Foo. The solution is to return an IReadOnlyCollection and call .ToList() (or .ToArray() or whatever else) on the results before returning them. Very few people catch the problem (IME) on the first pass over the code, and it does (usually) work, so... "tank" goes the performance.
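For concreteness, a sketch of the fixed version (same hypothetical GetConnection/query/parameters as the sample above):

Code:
private async Task<IReadOnlyCollection<Foo>> Get() {
  using (var connection = GetConnection()) {
    var rows = await connection.QueryAsync<Foo>(query, parameters);
    // Materialize while the connection is still open, so enumerating the
    // result later never re-runs the query.
    return rows.ToList();
  }
}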
 

Ardax

Ars Legatus Legionis
19,076
Subscriptor
The idea that being able to drop files into a folder and have them picked up and used by the scheduler would somehow involve an open file-share that anybody could mess with made me giggle. The funniest part is that I've seen places that I could envision doing something that stupid. And sadly, we all probably have seen places where we could envision that happening.
It's not just that, it's that they often don't start out so braindead, but someone with more political power than brains undermines things and turns your solution into nightmare fuel for The Daily WTF.

Edit @minnmass: Returning an IQueryable<> instead would fix the problem for Entity Framework. Will it help w/ Dapper?
 

minnmass

Ars Tribunus Militum
2,080
Subscriptor
Edit @minnmass: Returning an IQueryable<> instead would fix the problem for Entity Framework. Will it help w/ Dapper?

Honestly, not sure. To a rounding error, getting all of the Foos is desirable every time (especially since we're trying to separate "get data from the repo" from "business logic", largely successfully for newer stuff).

That said, I'd be surprised. the problem is that the using is disposed of before Get returns, so anything that interacts with its returned "thing" will (as I understand it and am able to communicate it) hit the IEnumerable state machine while its connection has been closed; the connection knows how to restart itself, and does so automatically, but the DB sees that the connection is closed so it forgets the result set. When the connection is re-established, the query needs to be re-run to get the "next" element, and the state machine helpfully re-disposes of the connection. Then, when you want to get the 3rd element...

It looks, from briefly skimming the MSDN entry, like IQueryable is more-or-less an IEnumerable with some extra bells; if so, it'd probably fall into the same problem of having to re-open the connection for each "get next" request.

If you're getting a dozen records from a simple query once an hour, "no problem" (depending on vagaries, changes to the underlying data may cause some weirdness). If you're getting potentially hundreds or thousands of records from a complicated query thousands of times a second, however...
 

Ardax

Ars Legatus Legionis
19,076
Subscriptor
It looks, from briefly skimming the MSDN entry, like IQueryable is more-or-less an IEnumerable with some extra bells; if so, it'd probably fall into the same problem of having to re-open the connection for each "get next" request.
Not exactly. IQueryable works with expression trees, and when the queryable is realized, the expression is sent to the backing query provider to be translated into the form best suited to the backing store. So for EF it'll take everything you've chained up so far -- including the .Max() from your sample -- and (eventually) compile that to SQL, so not only are you executing a single query, it'd be an aggregate query returning your single max value.
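As a rough sketch of what that buys you (hypothetical EF Core context with a Foos set and a Value column):

Code:
// The whole chain compiles to a single aggregate query,
// roughly: SELECT MAX([f].[Value]) FROM [Foos] AS [f]
var biggest = await db.Foos.MaxAsync(f => f.Value);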

I just don't know if Dapper is ORM "enough" to do that or if you need something heavier weight.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
Honestly, I still think you are going to run into bigger idiot problems with this one. Really, you want someone to review the code that is going to be run just to make sure that it is sane because someone *will* want to do something insane and have no idea of what insanity they hath wrought.
They issue a pull-request on that one project, we code review it and can drop it into the magic folder at any time to implement their new version
It's definitely something to only use in very specific situations. However, I've definitely had a handful of cases in the last few years where one person keeps needing ultra-basic changes to one small part of an app/website, and we've successfully taught them how to make the changes themselves and then let us review them before deploying. When the person is right, and the situation is right, it is wonderful to have them creating and debugging their stuff to their liking before we have to review it.
 

Pont

Ars Legatus Legionis
25,788
Subscriptor
That is CancellationToken.None, and it should not be used as a default parameter value.

Code:
// note: a static property like CancellationToken.None can't actually be a
// default parameter value; "= default" is the compilable equivalent
public async Task DoSomethingAsync(CancellationToken token = default) { ... }
NEVER do that anywhere other than the top level public API. Even then, I think it's an anti-pattern.

The user can call DoSomethingAsync(CancellationToken.None) explicitly, but if you use it internally, you run a great risk of calling DoSomethingAsync() and having an operation that never times out (or at least has a ridiculously long default timeout like 30 seconds). And, because this is syntactic sugar that the language provides, the compiler won't even warn you.

Better pattern
Code:
public async Task DoSomethingAsync(CancellationToken? token = null)
{
    // null means the caller didn't specify; fall back to a sensible timeout
    var effectiveToken = token ?? TokenFromSensibleTimeout();
    // ... do the actual work with effectiveToken ...
}

Even that should not be used internally. Too easy to end up using the default behavior when unintended and ignoring the proper cancellation from higher up.
 

Pont

Ars Legatus Legionis
25,788
Subscriptor
Mainly, you would only do that to create a linked CancellationTokenSource. e.g. When the user has passed in a token, but you want to make sure something takes no more than X milliseconds regardless of the token the user passed in.

Also, another common bug around CancellationTokens is assuming they're for timeouts. That is the most common use of CancellationTokens, but far from the only one. Parallel first-to-finish, app shutdown, parallel ops where an error means none of the other ops matter, etc.
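A minimal sketch of the linked-source version (callerToken and DoSomethingAsync are stand-ins):

Code:
// Cancelled after 25ms or when the caller's token fires, whichever is first.
using var cts = CancellationTokenSource.CreateLinkedTokenSource(callerToken);
cts.CancelAfter(TimeSpan.FromMilliseconds(25));
await DoSomethingAsync(cts.Token);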
 

Pont

Ars Legatus Legionis
25,788
Subscriptor
If you don’t want a timeout it’s fine. It’s fine to say you take the process out if need be at a much higher level rather than try to guess the caller’s intentions.

Having no parameter for a cancellation token is fine, if you know you don't want any timeout or cancellation. It is specifically defaulting a method parameter to CancellationToken.None that is bad.
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
Mainly, you would only do that to create a linked CancellationTokenSource. e.g. When the user has passed in a token, but you want to make sure something takes no more than X milliseconds regardless of the token the user passed in.

Also, another common bug around CancellationTokens is assuming they're for timeouts. That is the most common use of CancellationTokens, but far from the only one. Parallel first-to-finish, app shutdown, parallel ops where an error means none of the other ops matter, etc.
I still think of that as an opportunity for a LinkedTokenSource. But yeah that’s a fair use case.

The idea of partitioning a set of sub-operations for performance reasons (“this part can only take 25ms”) is an interesting one.
 

Pont

Ars Legatus Legionis
25,788
Subscriptor
The idea of partitioning a set of sub-operations for performance reasons (“this part can only take 25ms”) is an interesting one.

Pretty common in longer timeout situations where you're chaining a bunch of operations.

"Each one of these steps should take no less than 2.5s, but the whole series should take no more than 30s."
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
Oh sure. We have latency budgets, and talk about them explicitly. But there's always squish in the layers, and if there's a little delay over here, chances are you can make up for it in the p99 case by the end of the whole operation.

Yeah, some will get borked, but there's usually some slack in the system, allowing more to complete successfully than hard time-boxing each layer would.
 

david_a

Ars Scholae Palatinae
819
Subscriptor++
Anybody using Rider on an Apple Silicon Mac?

My personal machines are getting pretty long in the tooth and I want to buy some new ones within the next year or so. For a laptop, it's pretty hard to pass up the performance/battery numbers for the Apple chips... However, I would want to do .NET-y things on the machine.

From what I can tell, .NET works now but I'm not sure if it works well.
 

shade1978

Ars Tribunus Militum
2,525
Subscriptor++
Anybody using Rider on an Apple Silicon Mac?

My personal machines are getting pretty long in the tooth and I want to buy some new ones within the next year or so. For a laptop, it's pretty hard to pass up the performance/battery numbers for the Apple chips... However, I would want to do .NET-y things on the machine.

From what I can tell, .NET works now but I'm not sure if it works well.
I've been using it with .NET 6 on an M1 MBP. Works great, very performant. I don't do professional work with it currently (it's my personal laptop) so I can't vouch for every library or every scenario.

You might have some pain trying to combine a Rosetta-based .NET 3.1 install with an ARM .NET 6... I seem to recall having some issues with it and I think the way I resolved it was deleting all of my .NET SDKs and only installing .NET 6.
 

david_a

Ars Scholae Palatinae
819
Subscriptor++
Anybody using Rider on an Apple Silicon Mac?

My personal machines are getting pretty long in the tooth and I want to buy some new ones within the next year or so. For a laptop, it's pretty hard to pass up the performance/battery numbers for the Apple chips... However, I would want to do .NET-y things on the machine.

From what I can tell, .NET works now but I'm not sure if it works well.
I've been using it with .NET 6 on an M1 MBP. Works great, very performant. I don't do professional work with it currently (it's my personal laptop) so I can't vouch for every library or every scenario.

You might have some pain trying to combine a Rosetta-based .NET 3.1 install with an ARM .NET 6... I seem to recall having some issues with it and I think the way I resolved it was deleting all of my .NET SDKs and only installing .NET 6.
Cool. This will be a personal machine so there’s not much need for legacy stuff.

I was poking around the GitHub issues, and it sounds like .NET 7 will add a lot of low-level performance improvements for arm64, but it seems it's already pretty good.
 

david_a

Ars Scholae Palatinae
819
Subscriptor++
I use it semi-professionally for Xamarin dev. Works great, apart from Xamarin Mobile Forms which suck and make me wish I had opted to use React Native.
lol yeah I was on a greenfield Xamarin Forms project for a year and I was extremely disappointed with it. We started a month or two before they announced Maui and it was pretty obvious the MS guys were sick of working on that trash code base too. Strong "please, please just deal with it until we release Maui" energy from them.

Has anybody seriously looked into Maui? Is it a significant improvement, or just more of the same?
 

Jonathon

Ars Legatus Legionis
16,541
Subscriptor
Has anybody seriously looked into Maui? Is it a significant improvement, or just more of the same?
Wondering this myself.

If you're just doing personal stuff, Rider ought to be fine. VS on Mac definitely seems like a second class citizen compared to Windows for our devs at work.
VS for Mac is an unrelated product that Microsoft has rebranded as Visual Studio-- it's a rebranded Xamarin Studio, which itself originated as a fork of MonoDevelop.

They've put a decent amount of work into it over the years and it's seen substantial improvements largely thanks to Microsoft's modularization efforts around the compiler and IntelliSense (Roslyn was huge for the quality of their editor, and that was actually a year or two before MS bought Xamarin). But it is largely not any kind of unified code base with Visual Studio for Windows and it's not MS's main target when they're building developer experiences-- so, yeah, definitely a second-class citizen compared to Windows. Maybe not even a second-class citizen now that VS Code's a thing and the .NET cross-platform command line tooling's a lot better (or existent at all).
 

david_a

Ars Scholae Palatinae
819
Subscriptor++
VS Mac supposedly got a lot of love in the 2022 release, but that was after I switched to working on another project so I don't have any first hand knowledge of it. Xamarin Studio was abysmal all around, the early VS Mac was a massive improvement but given the starting point it was still awful, and VS 2022 is an unknown to me. .NET 6 also brought official support for writing Forms stuff in VS Code - this used to be tied to a bunch of tools inside of VS Mac.

I think Maui is a fork of Xamarin Forms TBH... I might be wrong though.
It's basically Xamarin Forms 6, but they're breaking compatibility to fix a few things. Some of the fundamental paradigms are different.

My reading of the tea leaves is that they are intentionally killing off the Xamarin branding since it does not have a good connotation to anyone that actually used it for a serious project. Xamarin (when it was a separate company) was very small and spread way too thin for all the ambitious projects they had. Most of their stuff worked (more or less) in a happy path, but once you got off of that everything fell apart. I think they dug a very, very deep hole of technical debt and Microsoft has been feeling the pain trying to fill it in.

Maui seems like a bit of a reset. They feel confident enough in some of the improvements that they're willing to sever compatibility. All good on paper, but I haven't used it yet to know if it's actually substantially better.
 

Jonathon

Ars Legatus Legionis
16,541
Subscriptor
VS Mac supposedly got a lot of love in the 2022 release, but that was after I switched to working on another project so I don't have any first hand knowledge of it. Xamarin Studio was abysmal all around, the early VS Mac was a massive improvement but given the starting point it was still awful, and VS 2022 is an unknown to me. .NET 6 also brought official support for writing Forms stuff in VS Code - this used to be tied to a bunch of tools inside of VS Mac.

I think Maui is a fork of Xamarin Forms TBH... I might be wrong though.
It's basically Xamarin Forms 6, but they're breaking compatibility to fix a few things. Some of the fundamental paradigms are different.

My reading of the tea leaves is that they are intentionally killing off the Xamarin branding since it does not have a good connotation to anyone that actually used it for a serious project. Xamarin (when it was a separate company) was very small and spread way too thin for all the ambitious projects they had. Most of their stuff worked (more or less) in a happy path, but once you got off of that everything fell apart. I think they dug a very, very deep hole of technical debt and Microsoft has been feeling the pain trying to fill it in.

Maui seems like a bit of a reset. They feel confident enough in some of the improvements that they're willing to sever compatibility. All good on paper, but I haven't used it yet to know if it's actually substantially better.
I really liked Xamarin itself for the iOS app I used it for-- but we were using it just in a shared core/platform-specific frontend type of arrangement. The app was all C#, but the UI was all done using the UIKit bindings to C# rather than anything cross-platform. The tooling was occasionally buggy, but mostly worked as advertised, and I don't have any regrets for going with Xamarin over native Objective-C.

Xamarin Forms 1.0, however, came along fairly late in our project, and couldn't have built even our fairly simplistic UI at the time (app was basically a fancy questionnaire app). It's progressed some since then, but it's got some fundamental design flaws that hopefully Maui is finally addressing.
 

svdsinner

Ars Legatus Legionis
15,093
Subscriptor
My brain is blank. There is a pattern for serializing objects. The pattern is this:
  1. Serialize the object by overriding the .ToString() method
  2. Add a static .Parse(string data) method to the class that returns the deserialized object
What is the name of this pattern? Is it related to an attribute or an interface?
It is something that I used in older versions of .NET, and I'm frustrated that I can't remember the details.

Note: My memory might be foggy on the details. The pattern might have been to implement some kind of Serialize() method rather than overriding the .ToString() method.
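In case it jogs anyone's memory, the shape I mean is roughly this (made-up Point type):

Code:
public sealed class Point
{
    public int X { get; set; }
    public int Y { get; set; }

    // serialize by overriding ToString()...
    public override string ToString() => $"{X},{Y}";

    // ...and deserialize with a static Parse()
    public static Point Parse(string data)
    {
        var parts = data.Split(',');
        return new Point { X = int.Parse(parts[0]), Y = int.Parse(parts[1]) };
    }
}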
 

Jonathon

Ars Legatus Legionis
16,541
Subscriptor
My brain is blank. There is a pattern for serializing objects. The pattern is this:
  1. Serialize the object by overriding the .ToString() method
  2. Add a static .Parse(string data) method to the class that returns the deserialized object
What is the name of this pattern? Is it related to an attribute or an interface?
It is something that I used in older versions of .NET, and I'm frustrated that I can't remember the details.

Note: My memory might be foggy on the details. The pattern might have been to implement some kind of Serialize() method rather than overriding the .ToString() method.
https://learn.microsoft.com/en-us/dotne ... ialization

Also ISerializable if you need more control over the serialization process (the custom serialization docs go over when and why you might use this, or some of the other serialization hooks).
 

Jehos

Ars Legatus Legionis
55,555
My brain is blank. There is a pattern for serializing objects. The pattern is this:
  1. Serialize the object by overriding the .ToString() method
  2. Add a static .Parse(string data) method to the class that returns the deserialized object
What is the name of this pattern? Is it related to an attribute or an interface?
It is something that I used in older versions of .NET, and I'm frustrated that I can't remember the details.

Note: My memory might be foggy on the details. The pattern might have been to implement some kind of Serialize() method rather than overriding the .ToString() method.
https://learn.microsoft.com/en-us/dotne ... ialization

Also ISerializable if you need more control over the serialization process (the custom serialization docs go over when and why you might use this, or some of the other serialization hooks).
This. Writing your own serializer is an antipattern at this point.
 

Jonathon

Ars Legatus Legionis
16,541
Subscriptor
My brain is blank. There is a pattern for serializing objects. The pattern is this:
  1. Serialize the object by overriding the .ToString() method
  2. Add a static .Parse(string data) method to the class that returns the deserialized object
What is the name of this pattern? Is it related to an attribute or an interface?
It is something that I used in older versions of .NET, and I'm frustrated that I can't remember the details.

Note: My memory might be foggy on the details. The pattern might have been to implement some kind of Serialize() method rather than overriding the .ToString() method.
https://learn.microsoft.com/en-us/dotne ... ialization

Also ISerializable if you need more control over the serialization process (the custom serialization docs go over when and why you might use this, or some of the other serialization hooks).
This. Writing your own serializer is an antipattern at this point.
Writing your own serializer has always been an antipattern. .NET [Serializable] and ISerializable go back to .NET (Framework, not Core) 1.1.

Although, personally, I'd rather see serialization to JSON (or Protobuf/gRPC) over the built-in binary or XML serialization these days (especially the binary formatter, which has security and compatibility caveats). Json.NET does this well, and you shouldn't be writing your own serializer there, either.
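e.g. the whole round trip with Json.NET (Foo being whatever POCO you're persisting):

Code:
// No hand-rolled format; the library owns the string representation.
var json = JsonConvert.SerializeObject(foo);
var roundTripped = JsonConvert.DeserializeObject<Foo>(json);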

(I cross languages a lot, so I generally need to avoid wire and disk formats that tie data to a specific language or framework.)
 

ShuggyCoUk

Ars Tribunus Angusticlavius
9,975
Subscriptor++
I’ve done it. I know of no name for it.

The closest I know of is “round trippable formatting”

It scales badly, and I would avoid using ToString() -- it’s too ambiguous an intent. At least implementing IFormattable and using the round-trip format string “R” is _vaguely_ idiomatic, but really, unless you are doing it for something akin to a primitive type, it’s not a great structure. It nests badly.
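For a primitive-ish type that looks like this (a sketch; “R” is the documented round-trip specifier for floating point):

Code:
double x = 1.0 / 3.0;
string s = x.ToString("R");   // round-trip format
double y = double.Parse(s);   // y == x again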
 

hanser

Ars Legatus Legionis
41,687
Subscriptor++
Just use JSON. It’s right there.

I'm trying to remember the name of THAT pattern,
I honestly don't think it has a name, because it's always been wrong.
The only case where it’s not wrong is domain-specific RFC serializers. Like if you’re implementing a standard. I make use of this pattern in my slow rewrite of my iCal library, which implements RFC 5545.

It’s almost always the wrong thing to do, tho. System.Text.Json exists for a reason.