God I hate it so much. We shifted up a major version, and the IDE has less functionality than the previous version, to the point where I'll use the old one for reviewing/exploring source.
Uniface?
The last thing I did at my previous job was migrate a big legacy app from Uniface 9 to 10. I liked some things about the new IDE but it definitely felt unfinished.
God I hate it so much. We shifted up a major version, and the IDE has less functionality than the previous version, to the point where I'll use the old one for reviewing/exploring source.
It is vendor-specific syntactic sugar. But at least it only grosses up your code in the parts that deal with SQL. The rest of your code is just passed through as is.
Probably Pro*C.
namelist format is not implemented as some sort of library; rather, access to such files is integrated directly into the read and write statements, which have a namelist-specific mode of operation activated by passing the optional nml parameter as an argument to the calls. I need to double-check, but the recent error that led me to look into this makes me think that different Fortran compilers are inconsistent in how they implement it too, much like their conformance with the rest of the Fortran standard.
It's not that it's harder in C to return in multiple places; all the underlying housekeeping is taken care of for you, so there is nothing to "do" but type return wherever you want (it has been that way in my experience for a very long time). It's just to prevent confusion, similar to the guidance against using goto.
/tangent
As mentioned previously, the problem is with any cleanup that you have to do... closing file handles, releasing memory properly, etc. If you return from multiple places, you have to make sure that you have cleanup in multiple places or else you'll leak resources. Fortunately, modern languages with various semantics help with this... 'with' statements, automatic destructor calling, etc. no matter where/when you 'return'.
A handful of times I've done this in C with a macro that checks what was opened/allocated and cleans up before returning, but it doesn't look professional, it just looks like I was trying to be cute.
#define RETURN(x) do {if (fd != NULL) fclose(fd); if (ptr != NULL) free(ptr); return (x);} while (0)
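To make the trade-off concrete, here is a minimal sketch of how such a macro ends up being used; the function name, file name, and one-byte buffer are made up for illustration, and it assumes the handles are called fd and ptr exactly as the macro expects (the macro is repeated so the sketch stands alone):
Code:
#include <stdio.h>
#include <stdlib.h>

/* Same macro as above, repeated so this compiles on its own. */
#define RETURN(x) do { if (fd != NULL) fclose(fd); \
                       if (ptr != NULL) free(ptr); \
                       return (x); } while (0)

/* Made-up example: read the first byte of a file. Every exit goes
   through RETURN(), so fd and ptr are released on each path. */
static int first_byte(const char *path)
{
    FILE *fd = NULL;
    unsigned char *ptr = NULL;

    fd = fopen(path, "rb");
    if (fd == NULL)
        RETURN(-1);               /* nothing allocated yet, still safe */

    ptr = malloc(1);
    if (ptr == NULL)
        RETURN(-1);               /* closes fd */

    if (fread(ptr, 1, 1, fd) != 1)
        RETURN(-1);               /* closes fd and frees ptr */

    int value = ptr[0];           /* copy out before the macro frees ptr */
    RETURN(value);
}

int main(void)
{
    printf("%d\n", first_byte("example.txt"));  /* hypothetical file name */
    return 0;
}
The catch, of course, is that the macro silently depends on every function declaring fd and ptr with exactly those names and initialising them to NULL, which is part of why it reads as cute rather than professional.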
A handful of times I've done this in C with a macro that checks what was opened/allocated and cleans up before returning, but it doesn't look professional, it just looks like I was trying to be cute.
VERY early on I did some cutesy things... had someone complain about it so I stopped. I'd write loops (in C) without a code block, using commas (statement separators) and only one semi (statement terminator), like this:
for (int index = 0; index < somval; index++)
    stmt,
    stmt,
    stmt;
A handful of times I've done this in C with a macro that checks what was opened/allocated and cleans up before returning, but it doesn't look professional, it just looks like I was trying to be cute.
Code: #define RETURN(x) do {if (fd != NULL) fclose(fd); if (ptr != NULL) free(ptr); return (x);} while (0)
That works great until someone finds a reason to release some of those resources early. The act of releasing the resource typically doesn't also zero out the handle to it, but just leaves it as a dangling reference. If you're very lucky, the code crashes immediately on the double-free or repeated fclose. If you're very unlucky, something else has been allocated at the same address, it's supposed to outlive the function, and is now subject to use-after-free.
In C++ there's std::experimental::scope_exit. If you're writing C that doesn't need to be portable, there's the cleanup attribute in GCC and Clang.
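For anyone curious, a rough sketch of what the cleanup attribute looks like, applied to the same sort of made-up example as above; the helper functions and names are mine, not anything shipped with the compiler, and it only builds on GCC or Clang:
Code:
#include <stdio.h>
#include <stdlib.h>

/* Cleanup handlers: each is called automatically with a pointer to the
   annotated variable when it goes out of scope, on every return path. */
static void close_file(FILE **fp)
{
    if (*fp != NULL)
        fclose(*fp);
}

static void free_mem(void *p)
{
    free(*(void **)p);            /* free(NULL) is harmless */
}

static int first_byte(const char *path)   /* made-up example function */
{
    FILE *fd __attribute__((cleanup(close_file))) = fopen(path, "rb");
    unsigned char *ptr __attribute__((cleanup(free_mem))) = malloc(1);

    if (fd == NULL || ptr == NULL)
        return -1;                /* both handlers still run */

    if (fread(ptr, 1, 1, fd) != 1)
        return -1;

    return ptr[0];                /* value is read before ptr is freed */
}

int main(void)
{
    printf("%d\n", first_byte("example.txt"));  /* hypothetical file name */
    return 0;
}
Unlike the macro, the cleanup is attached to the variables themselves, so a plain return anywhere in the function can't leak them.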
DBase II was by Ashton-Tate, I believe.
Way back in something like 1987-ish I was using something called DBaseII (de-base-two) on MPM machines. I'm not sure if it was by Borland or someone else. It was rubbish. It was very much a database language, in fact too much so, as the other things like control structures were very lacking indeed. And I recall it only had 26 variables available and no arrays. We often had to concatenate string variables into one to prevent us running out!!
Ooh, PostScript was weird to 16 year old me who was used to Basic, Pascal, and C. Having no mentorship or guidance didn't help.
Next significant language was PostScript once we got a few laser printers.
The Defense Dept. supported Hopper because "business" applications in the early 1960s were being coded in 2nd-generation assembly languages by programmers knowing no more than elementary algebra, who were in short supply. I'd done about a year of that myself, so learned COBOL in 1 week flat in 1969. Problems arose when such applications endured.
I think one reason that the old, extremely wordy code works so well is that at the time of its creation, the sheer effort of entry and the high (personal time, amongst other things) cost of errors meant people did a lot more rigorous work up-front, before even touching the keyboard.
The problem being, those creaky weird COBOL programs worked.
While it would have been pure misery pecking in code in such a verbose language on a Teletype, COBOL was extremely successful for its designed tasks. And so many projects to "get rid of this old junk and replace it with something in a modern language" have failed that it's become a meme all its own.
I think that's mostly survivor bias. If you spend enough time on computer folklore sites, you start seeing stories of software just as awful as anything produced today.
I think one reason that the old, extremely wordy code works so well is that at the time of its creation, the sheer effort of entry and the high (personal time, amongst other things) cost of errors meant people did a lot more rigorous work up-front, before even touching the keyboard.
Ah, ok, I get it now, frame of reference. I'm coming at this from the much smaller world of embedded C projects. Embedded almost always uses statically allocated memory: no memory to free, no files to close, you're just returning from a function call, possibly in an interrupt context or RTOS task context switch. C is still the dominant embedded programming language; I probably wouldn't use C in a resource-rich environment.
As mentioned previously, the problem is with any cleanup that you have to do... closing file handles, releasing memory properly, etc. If you return from multiple places, you have to make sure that you have cleanup in multiple places or else you'll leak resources. Fortunately, modern languages with various semantics help with this... 'with' statements, automatic destructor calling, etc. no matter where/when you 'return'.
Ah, ok, I get it now, frame of reference. I'm coming at this from the much smaller world of embedded C projects. Embedded almost always uses statically allocated memory: no memory to free, no files to close, you're just returning from a function call, possibly in an interrupt context or RTOS task context switch. C is still the dominant embedded programming language; I probably wouldn't use C in a resource-rich environment.
Even in a static-resource environment, there might be temporary state changes that need to be reverted on function exit. Think interrupt masks; CPU modes, privilege levels or registers; locks, mutexes, semaphores, etc.
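A small sketch of that in embedded-style C; the interrupt-control calls are stand-in stubs (real code would use the platform's own intrinsics or HAL), and funnelling everything through one exit point is what keeps the mask restore from being forgotten:
Code:
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Stand-ins for platform interrupt-control intrinsics; stubs so the
   sketch compiles on a desktop compiler. */
static uint32_t irq_save_and_disable(void) { return 0; }
static void     irq_restore(uint32_t mask) { (void)mask; }

static volatile uint16_t g_event_count = 3;   /* shared with an ISR in real code */

/* The temporary state change (interrupts masked) must be reverted on
   every exit, so there is a single return and the restore sits right
   before it; with early returns, each one would need its own restore. */
static bool consume_event(uint16_t *out)
{
    uint32_t mask = irq_save_and_disable();
    bool ok = false;

    if (out != NULL && g_event_count > 0) {
        *out = g_event_count;
        g_event_count--;
        ok = true;
    }

    irq_restore(mask);
    return ok;
}

int main(void)
{
    uint16_t ev;
    if (consume_event(&ev))
        printf("event %d\n", (int)ev);
    return 0;
}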
Wikipedia says development started in 1958, published in a paper in 1960. Some of its key concepts are from Information Processing Language from a couple of years earlier.
How many people know how old LISP really is?
For those who had previously only had assembly code as the highest level language available, the change ushered in by serious HLLs must have been nothing short of revolutionary - in terms of developing the conceptualisation of algorithms as much as in accelerating productivity of programmers.
COBOL appeared in 1959, and that came from FLOW-MATIC which was of 1955 vintage.
The late 50s would have been such a time to be in computing!
I thought AutoLISP went the way of the dodo a few years back? ISTR much wailing and gnashing of teeth about it at the time. Although I could have dreamed that.
Wikipedia says development started in 1958, published in a paper in 1960. Some of its key concepts are from Information Processing Language from a couple of years earlier.
In today's world it is hard to get your head around high level language programs being entered via punched cards.
The article suggests that Lisp is the second oldest HLL still in use after Fortran. I'm wondering what the usage pattern of each of these is nowadays. I've never come across Fortran other than mentions and brief descriptions in books, but am told it is still used in physics labs etc. Lisp is still used in AutoCAD and Emacs, and Scheme is used in Gnucash. The backend of Grammarly is written in Lisp. It seems that Lisp, although not used by that many people, is more spread out by use cases, so it is likely better known than Fortran. Clojure is a modern variant of Lisp. Fortran appears to just exist as newer revisions of the language - there has been no major reinterpretation of it as such. Maybe that is a sign that it did a good job already?
They replaced it with Visual Lisp - which for most people was pretty much the same thing. They added an editor with syntax highlighting.
I thought AutoLISP went the way of the dodo a few years back? ISTR much wailing and gnashing of teeth about it at the time. Although I could have dreamed that.
Besides direct usage in a lot of scientific and engineering computing (which I'm directly involved in), anyone using NumPy and SciPy is using Fortran code for a substantial number of the numerical kernels under the hood.
I've never come across Fortran other than mentions and brief descriptions in books, but am told it is still used in physics labs etc.
Could be Autodesk's own marketing. They deprecated their IDE (in favour of VS Code), which makes sense: why develop an IDE when others out there are flexible enough to do what you need?
Clearly I've had a bit of a ChatGPT moment here.