>unix philosophy of small programs running other programs in chain
>try to use it
>run curl from your software
>try to make multiple requests to same server using keep alive
>not possible, after each request the curl process is closed and so are its TCP connections
>cannot reuse the TCP connection, cannot use keep alive
>have to use library like real programmers instead of unix brain damage
>had to waste my time to learn that the unix philosophy is retarded garbage
please be smarter than me and never go the unix way
>unix way sucks lol
Name a superior alternative.
If it's Windows then you automatically disqualify yourself as a total moron, which you may already be anyway.
>I have problems
Then clearly you have not learned the Unix way.
>inb4 this is a bait/troll thread
Likely true, I will sage.
>>8880 The unix philosophy says one should write small c programs to handle tasks with large asymptotic complexity. Then use shell to connect them together in ways that have low asymptotic complexity.
If you are making a large number of calls to curl, you are misapplying the unix philosophy: learn how to invoke curl with a list of urls, and it will automatically use keep-alive for its connections. Alternatively, you are only invoking curl a small number of times, at which point who gives a fuck about keep-alive?
>>8882 >Name a superior alternative.
Windows, MacOS, AmigaOS, ReactOS
>If it's Windows then you automatically disqualify yourself as a total moron
No, u
>>8903 >If you are making a large number of calls to curl
even two calls won't work with keep-alive
>learn how to invoke curl with a list of urls, and it will automatically use keep-alive for its connections.
even if that was possible, you wouldn't be able to change parameters or post things between each request
>Alternatively, you are only invoking curl a small number of times, at which point who gives a fuck about keep-alive?
all web browsers use keep-alive. if your software doesn't, it will stick out like a sore thumb
>>8904 >using keep alive isn't even the unix way, the unix way is to run a cron job
do you even know what you are talking about? keep-alive is so that when a web browser makes many requests to the same server in a short time, it doesn't close the TCP connection after each request, but disconnects after some period of inactivity
>>8913 unix/GNU/linux/FSF are the same, shit thing, I don't care what they are called
>>8903 >c
The unix philosophy doesn't specify a language
>>8940 >even if that was possible, you wouldn't be able to change parameters or post things between each request
Yeah that's when you use libraries. Just like on any platform.
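For the record, here is roughly what that looks like with libcurl (a minimal sketch; the URLs and form fields are made up): reuse one easy handle and the library keeps the TCP connection alive between requests, and you can change options or switch to POST in between.
/* minimal sketch assuming libcurl: one reused easy handle keeps the
   connection alive between requests, and options can change in between */
#include <curl/curl.h>

int main(void)
{
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *h = curl_easy_init();
    if (!h)
        return 1;

    /* first request: plain GET (made-up URL) */
    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/page1");
    curl_easy_perform(h);

    /* second request on the SAME handle: switch to POST with new data;
       libcurl reuses the idle connection instead of opening a new one */
    curl_easy_setopt(h, CURLOPT_URL, "https://example.com/login");
    curl_easy_setopt(h, CURLOPT_POSTFIELDS, "user=anon&pass=hunter2");
    curl_easy_perform(h);

    curl_easy_cleanup(h);
    curl_global_cleanup();
    return 0;
}
Build it with something like cc keepalive.c -lcurl.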
>>8975 >all web browsers use keep-alive. if your software doesn't
curl is obviously designed for one off requests.
So the strawman challenge is multiple http requests with keep-alive on the command line without using a real programming language?
Answer is /dev/tcp.
>unix/GNU/linux/FSF are the same, shit thing, I don't care what they are called
How is it possible to be this ignorant and still posting on a /g/ board.
>>8975 >Yeah that's when you use libraries. Just like on any platform.
no, because the unix philosophy tells you to use small tools calling other tools calling other tools. a philosophy that turns out to be shit if you apply it
>>8978 >So the strawman challenge is multiple http requests with keep-alive on the command line without using a real programming language?
no. nobody said "without using a real programming language"
>How is it possible to be this ignorant and still posting on a /g/ board.
linux snowflakes like you think the entire world gives a fuck about their shit 1% market share OS and what it is called
>>9003 >no, because the unix philosophy tells you to use small tools calling other tools calling other tools.
It tells you to write small programs that do one thing and do it well. It also recommends allowing for interaction through text streams. It doesn't say "don't use libraries" you moron.
Now, if you're writing a program in c or any other low level language, there's practically no difference in terms of easiness between using a library and calling a binary, while there is a performance trade off if you go for the latter.
If however you are using a high level, especially scripting, language, calling a binary usually becomes trivial, you don't need to write bindings or whatever, and moreover the binary's options are the same no matter what language you are using. So by calling a binary you save yourself quite a bit of time. If you need to do some specific thing that the binary's interface doesn't allow for, then well, go write/find and study bindings, or find a language that has them already.
And frankly, most people don't need to alter request parameters between requests that need keep-alive all that badly.
So this "philosophy" saves people's time, how's that a bad thing?
>>9021 >And frankly, most people don't need to alter request parameters between requests that need keep-alive all that badly.
logging into a page and uploading a file, or any interaction requiring the POST method
>>9021 differentfag here
>>no, because the unix philosophy tells you to use small tools calling other tools calling other tools.
>It tells you to write small programs that do one thing and do it well.
The UNIX way means writing a CLI tool in the most insecure script-kiddie saturated language in existence (C), and then making a bunch of bullshit to parse input from the user, and then "composing" such programs by abusing them to act as libraries instead of user-facing CLI tools. UNIX only does what you say ostensibly. I can shit out an OS in SML or LISP in a day and it will be better in every way.
>It also recommends allowing for interaction through text streams.
Which is garbage and may as well be binary streams.
>>8978 >How is it possible to be this ignorant and still posting on a /g/ board.
This is /tech/. /g/ are the retard shitters like pic related
>The UNIX way means writing a CLI tool in the most insecure script-kiddie saturated language in existence (C)
C is not a scripting language. How could using it make you a script kiddie? It's only insecure if you decide to use it wrong, and designing it in a way that would totally prevent you from ever using it in a way deemed to "always be wrong" would end up being a problem in its own right.
>I can shit out an OS in SML or LISP in a day and it will be better in every way.
Do it. Go do it right now and come back to show us. Stop bullshitting about how LISP is a superior language to C, and prove it by providing good LISP software.
>>9179 >C is not a scripting language
It was for LISP machines.
>It's only insecure if you decide to use it wrong
No, C makes it easy to use it wrong without deciding to do so. There is no way to check that you did something right. Other languages have checkers for memory safety. Not C.
>Make your own OS
Making a full operating system is a lot of work. Perhaps given $100k and 6 months I could put together a team who could make an extremely secure, yet performant operating system. It would be in the next generation of operating systems after UNIX.
>C makes it easy to use it wrong without deciding to do so.
Things like undefined behavior make the language easier to implement and more portable. If you follow only what the standard specifies it is much harder to get wrong than you exaggerators would have non programmers believe.
>There is no way to check that you did something right. Other languages have checkers for memory safety. Not C.
Static analysis tools are better now than they have ever been. If you're that bothered by memory unsafety being possible in C, use C++.
>>9168 >c
The unix philosophy doesn't specify a language
>>9179 >if you decide to use it wrong
What a weird way to say "if you make any mistakes".
>>9182 >If you're that bothered by memory unsafety being possible in C, use C++.
This sentence shows your very deep understanding of this matter.
>>9179 >The UNIX way means writing a CLI tool in the most insecure script-kiddie saturated language in existence (C)
C is not a scripting language. How could using it make you a script kiddie? It's only insecure if you decide to use it wrong,
i said C is script-kiddie ridden, not that it makes you into a script kiddie
>and designing it in a way that would totally prevent you from ever using it in a way deemed to "always be wrong" would end up being a problem in its own right.
C is bad from a PL design perspective. having the ability to do pointers and shit has nothing to do with it. it's undefined behavior for a program to have @ anywhere in the source code [1]. THAT is the problem. also i dont agree with using C to implement some trivial low performance shit like a chat program. i use C/ASM myself, and only when it's needed, for image processing code etc
>Do it. Go do it right now and come back to show us. Stop bullshitting about how LISP is a superior language to C, and prove it by providing good LISP software.
>Stop bullshitting about how LISP is a superior language to C
it is, but where did i say that? even Java is better than C. we could add pointers and machine types to java and add a hack to avoid GC and it would already be better than C. anyway my point was against saying how bad UNIX is, not C by itself.
>>9182 >Things like undefined behavior make the language easier to implement and more portable.
false. see [1]
>If you follow only what the standard specifies it is much harder to get wrong than you exaggerators would have non programmers believe.
i bet you don't even follow signal-safety(7). do your signal handlers only modify variables of the type volatile sig_atomic_t?
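for reference, a handler that actually follows the rule looks roughly like this (a minimal sketch, assuming POSIX sigaction; the names are made up): the handler does nothing but set a volatile sig_atomic_t flag, and all the real work happens outside of it.
#define _POSIX_C_SOURCE 200809L
#include <signal.h>
#include <stdio.h>
#include <unistd.h>

/* the only thing the handler touches: a flag of type volatile sig_atomic_t */
static volatile sig_atomic_t got_sigint = 0;

static void on_sigint(int signum)
{
    (void)signum;
    got_sigint = 1;   /* no printf, no malloc, nothing async-signal-unsafe */
}

int main(void)
{
    struct sigaction sa;
    sa.sa_handler = on_sigint;
    sigemptyset(&sa.sa_mask);
    sa.sa_flags = 0;
    sigaction(SIGINT, &sa, NULL);

    while (!got_sigint)
        pause();      /* sleep until a signal arrives */

    puts("caught SIGINT, exiting cleanly");
    return 0;
}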
>If you're that bothered by memory unsafety being possible in C, use C++.
are you trolling?
>>9186 >c
>>The unix philosophy doesn't specify a language
well in practice every UNIX is just the same shit as Linux, and now that UNIX is dead it will be true in theory too
>>9213 >see [1]
where?
>>Things like undefined behavior make the language easier to implement and more portable.
This is plainly true. Not defining the byte order, eg, is solely justified by portability.
>>9214 most UB in C is unnecessary and a result of parts of the language they forgot to specify (because C was not meant to be used outside some dudes' side project in the first place).
for example having an @ character in your source is UB. for example retarded syntactic ambiguities are UB
>>9218 it's UB, dumbfuck. all UB has the potential to become a vuln. the fact that you don't understand the significance of UB (copying and pasting the classic point-missing "it makes it easier to implement the compiler" meme doesn't help) makes it seem like YOU don't deem it important.
here you go, you stupid fuck. had this in my queue of stuff to post.
even UNIXcrap itself admits its crap
/bin/mkdir -p '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/lib64'
/bin/mkdir -p '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/include'
/bin/mkdir -p '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/include'
/bin/sh ./libtool --mode=install /var/tmp/portage/._portage_reinstall_.5yxjcrqh/bin/ebuild-helpers/xattr/install -c libgmp.la libgmpxx.la '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/lib64'
/var/tmp/portage/._portage_reinstall_.5yxjcrqh/bin/ebuild-helpers/xattr/install -c -m 644 /var/tmp/portage/dev-libs/gmp-6.1.2-r1/work/gmp-6.1.2/gmpxx.h '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/include'
/var/tmp/portage/._portage_reinstall_.5yxjcrqh/bin/ebuild-helpers/xattr/install -c -m 644 gmp.h '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/image//usr/include'
make install-data-hook
make[4]: Entering directory '/var/tmp/portage/dev-libs/gmp-6.1.2-r1/work/gmp-6.1.2-abi_x86_64.amd64'
+-------------------------------------------------------------+
| CAUTION: |
| |
| If you have not already run "make check", then we strongly |
| recommend you do so. |
| |
| GMP has been carefully tested by its authors, but compilers |
| are all too often released with serious bugs. GMP tends to |
| explore interesting corners in compilers and has hit bugs |
| on quite a few occasions. |
| |
+-------------------------------------------------------------+
>>9222 It's much better to ask people to run tests as opposed to just having people assume that it will work on their machine. The bug might even be in the user's hardware instead of the software.
>>8940 >you wouldn't be able to change parameters or post things between each request
You have obtuse needs. Unix philosophy says write a composable reusable c program to cover the general case, then wrap a shell script around it for the particular case. Unless you never expect to need to emulate browsers again, at which point write a bespoke tool.
>>8975 >The unix philosophy doesn't specify a language
This is true, I was simplifying. Obviously for more mathematical code you should drop down to fortran, and for really low level stuff you should prefer assembly. The general point still stands.
>>9222 >how dare software have bugs in it sometimes
<the absolute state of unix haters
>>9219 I have never once seen any C code with a @ in it.
I have seen more code written in "safe" languages subject to vulnerabilities from logical errors than I have seen of your example, so your example is worth shit. What was your point again?
>>9231 It also says
'A universal character name shall not specify a character whose short identifier is less than 00A0 other than 0024 ($), 0040 (@), or 0060 (`), nor one in the range D800 through DFFF inclusive.
>This is the hill you're going to die on? "C should never have @ in it anyways"?
Yes. I can't think of a single naming convention that uses the @ character.
>>8880 >run curl from your software
>try to make multiple requests to same server using keep alive
>not possible, after each request the curl process is closed and so are its TCP connections
Connect through a local http proxy like polipo and the proxy will keep the connection alive to the server even if you call curl multiple times.
>>9065 >logging into a page and uploading a file, or any interaction requiring the POST method
People have been POSTing long before keep alive. Keep alive is not even a mandatory feature of HTTP. The mechanism that lets you login and then access restricted pages is cookies and curl obviously has support for persisting cookies between calls.
sage because it's not clear if you're trolling or genuinely this ignorant but pointless thread either way.
>>9251 >Connect through a local http proxy like polipo and the proxy will keep the connection alive to the server even if you call curl multiple times.
this might work but it's a complication, I should remember to use polipo and tell every user to do it? it's solving the problem with tape
>People have been POSTing long before keep alive. Keep alive is not even a mandatory feature of HTTP. The mechanism that lets you login and then access restricted pages is cookies and curl obviously has support for persisting cookies between calls.
keep alive is not mandatory for anything, but it will make your software look different from a web browser. the web server might deny you or track you
>>9262 >this might work but it's a complication, I should remember to use polipo and tell every user to do it?
Polipo runs as a daemon so once it's added to your init system it will always be waiting in the background.
Then you export the http_proxy environmental variable and all your curl requests will go through the proxy.
So no you don't have to "remember" to use it, you just need to set it up once.
>it's solving the problem with tape
The concept of daemons is core to the Unix philosophy. The reason this solution seems strange to you is because you don't seem to know much about Unix in the first place.
>keep alive is not mandatory for anything, but it will make your software look different from a web browser.
Your software will always look different from a web browser when it's not a fucking web browser. Use Selenium or Marionette to control a real web browser if that's the actual problem you're trying to solve.
>>9394 >Use Selenium or Marionette to control a real web browser if that's the actual problem you're trying to solve.
if you do that your software will be vulnerable to browser exploits, tracking and other shit
>>9250 are you retarded? GNU is crap but there are far worse things out there in the UNIX world.
and GNU IS UNIX. it's fucking indistinguishable from UNIX shit. you probably also think plan9 isn't UNIX. from the perspective of someone who doesn't want to fuck with UNIX-like systems, I couldn't give less of a fuck whether something matches some pedantic definition of UNIX. Here, ill give you a quick definition of SOME of the bullshit i dont want
>"terminal emulators", metacharacters, TUIs
>environmental variables
>"""build systems"""
>name clashes
>3 levels of escaping
>"scripting" languages
>mime, and other pseudo-webshit
>string injection vulns
>memory safety vulns
>mismatched deps vulns
>character sets
>files and folders
>login screens
>ABI, shared libraries, static libraries
>"permissions" that don't even work
>^C^C^W^\^C^X
>hostnames, domain names, subdomains, DNS, mx records, "secure" DNS solution du jour, IP addresses, mac addresses
and yes, i consider windows and mac the same bullshit as UNIX (missing some unfortunate "features" like metacharacters but making up in their own retarded ways)
>>9228 >I have never once seen any C code with a @ in it.
how about in a fucking comment? like when the dev feels the need to put his email address (or worse, a block of license text) at the top of the C file
>I have seen more code written in "safe" languages subject to vulnerabilities from logical errors than I have seen of your example, What was your point again?
Yes, there could be 0 and my point would still be correct. Reread the thread. If you still don't know what the point is, you probably are missing some basic software engineering concepts.
>>9227 >oh our compilers break every day from the latest GMP version
>no big deal, this has no implications
>>9223 do you even know what you are trying to say? some ad hoc non exhaustive set of tests fixes nothing, and this shouldn't have been an issue in the first place. portable code should actually be portable, more than 50% of the time.
also future proof, more than 5% of the time. and before you freak out here, I'm talking future proof while still only being concerned with the definition of the language. for example C code that broke after moving to x64 was because retards assume the size of the int types and shit like that, despite the language spec saying exactly what int is. it's also funny that half the "red pilled" C developers call other people incompetent, while not being able to grasp simple concepts like this. like in 99% of internet "debates", there will be some redpilled C kid defending C while not even knowing that an int can be a different size than 32 bits depending on the environment
>>9499 >for example C code that broke after moving to x64 was because retards assume the size of the int types and shit like that, despite the language spec saying exactly what int is.
the C spec is brain damaged, just like unix weenies
it is fully retarded to define that int is a different size on different architectures
it doesn't give any benefits, it just makes your code non portable
>>8880 I think one of the best ways to see the uses of the unix philosophy is in something like document creation. Let's look at three main methods; one monolithic, one semi-modular, and one that follows the Unix philosophy:
>LibreOffice Writer
I'm using this over Microsoft Word just so nobody claims that I'm a biased freetard. The "standard" way of document creation. This is a monolithic, horrible program. It is slow, bloated, managing references is a pain, and the formatting is fine until it suddenly isn't: usually in the middle of a long, important paper or book with numerous graphs. It's easy to learn, but shit to master. If (read when) something breaks, you're toast.
>LaTeX
Standard professional typesetting software. I call it semi-modular because of how packages are handled, even though LaTeX itself is also kind of monolithic. You can do anything here, but the syntax sucks and it takes an annoyingly long time to compile. It also has little quirks, such as in the need to compile two or sometimes three times when adding a new reference. It's good, but a bit of a headache. When something breaks, enjoy weird error messages and diving through Stack Overflow.
>*roff (groff, nroff, troff, etc)
Unix philosophy typesetting. Old as fuck. Barely anybody uses it anymore. However, it's perfectly functional (especially with the mom macro set), it's fast, and each component is small, simple, and therefore has less of a chance of breakage. The syntax isn't great, but it's a lot better than LaTeX. When something breaks, there's usually a simple error message explaining the problem and it's trivial to fix. You also won't have to dive through stack overflow because documentation is also minimal (unfortunately).
Although this shows some peculiarities of document creation, I think it ultimately gives us a truth about why the Unix philosophy can be useful. It doesn't always win in software, and it's not even always the best (LaTeX, for all its flaws, is better when writing something very long and complex), but it does provide a good enough solution for most use cases and does it in a way that makes troubleshooting when things do go wrong fairly easy.
>>9508 >Your code would be portable if YOU weren't such a retard.
>Use stdint.h if you need your integers to be specific sizes.
why would anybody not need integers of specific sizes? stdint.h shouldn't be separate, optional shit, it should be built into the language instead of having random int sizes
C niles are brain damaged and so is their standard
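to be concrete, the stdint.h types in question look like this (a minimal sketch; needs a C99 compiler):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int      n   = 1000000;   /* int is only guaranteed to hold +/-32767, so this is not portable */
    int32_t  n32 = 1000000;   /* exactly 32 bits on every platform that provides the type */
    uint16_t u16 = 40000;     /* exactly 16 bits; arithmetic on it wraps modulo 65536 */

    printf("sizeof(int)=%zu sizeof(int32_t)=%zu sizeof(uint16_t)=%zu\n",
           sizeof n, sizeof n32, sizeof u16);
    return 0;
}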
>>9509 The only sane document creation is when you type text, then select it with mouse and click on buttons to format it. all this latex bullshit is unix brain damage
>requiring a journalist or book writer to learn some programming or magic code just so he can make bold text
only linux weenies are that dumb
>>9524 >why would anybody not need integers of specific sizes?
Performance. The point of int, short, long etc. is that they are efficiently mapped to the word size of whatever architecture the code is compiled for.
>The only sane document creation is when you type text, then select it with mouse and click on buttons to format it. all this latex bullshit is unix brain damage
You can't diff or version control binary and xml document formats. Your "point and click" approach is fine for baby's first document but not for serious and collaborative work.
>>9533 >You can't diff or version control binary and xml document formats.
Yes you can, you fucking retard. And it's probably better to diff ASTs than some retarded text.
>Your "point and click" approach is fine for baby's first document but not for serious and collaborative work.
<look at me i am a big boy i have big boy things to do, unlike thou
literally the same argument corporate dick suckers use to justify having unpatched vulns in their software
>>9529 >Anybody trying to port a C compiler across computer architectures.
I don't see how fixed size ints would stop this. a C compiler is something so low level that it shouldn't be expected that you take 32 bit linux compiler code and try to compile it to some 8 bit microcontroller without any code changes
>>9533 >Performance. The point of int, short, long etc. is that they are efficiently mapped to the word size of whatever architecture the code is compiled for.
if programmer uses int because he wants to store number 1000000, then stupid C compiler makes this variable 16 bit when compiling on 16 bit architecture, you won't get higher performance, you will get broken software
if something opposite was being made, for example if you intended to have 16 bits variable, but compiler behind your back makes it 32 bits for performance. this is acceptable*, but this could be done even if there was no "int" type but only int16, int32 etc. you could type int16 in C, but it would only guarantee that variable will be at least 16 bits (could be bigger if compiler decides so)
but C niles and their committee are too brain damaged for that
* but still risky as programmer might rely on boundary of types, for example he adds 1 to unsigned byte of 255 to get 0 (not 256)
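for what it's worth, the "at least 16 bits" semantics described above is roughly what C99's int_least16_t already gives you, and the wraparound in the footnote is only defined for unsigned types. a minimal sketch (assumes C99):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int_least16_t v = 8888;   /* guaranteed to be at least 16 bits wide, may be wider */
    uint8_t b = 255;
    b = b + 1;                /* unsigned conversion is modular: b is now 0, not 256 */

    printf("v=%d  b=%u  sizeof(int_least16_t)=%zu\n", (int)v, (unsigned)b, sizeof v);
    return 0;
}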
>>9533 >You can't diff or version control binary and xml document formats. Your "point and click" approach is fine for baby's first document but not for serious and collaborative work.
I see zero reasons why diff or version control couldn't be implemented for binary format of point and click program
you probably meant that your shitty unix tools and git don't support them
>>9563 >but compiler behind your back makes it 32 bits for performance. this is acceptable*,
It's not even an edge case, any code that assumes a smaller int size than what the compiler generates is very prone to breaking. another example is storing an int and expecting it to take 4 bytes
>>9565 programmer shouldn't give a fuck what compiler generates
he has language and language specification
C specification should be that there are types int8 int16, int32 etc
if programmer types int16 he should be guaranteed that it will be at least 16 bit (or maybe exactly 16 bit?) and that it will be able to store number 8888
he shouldn't give a fuck about compilers, assembly, architectures. his code should work on all architectures that have C compiler
stdint.h is just a solution for C & unix brain damage. a solution that wouldn't be needed if they weren't brain damaged in the first place
>>9566 they have all that shit retard
>stdint.h is just a solution for C & unix brain damage
surprise: c cares about backwards compatibility. Call that "braindamage" all you like
>>9563 >if programmer uses int because he wants to store number 1000000...
Empirically most ints are small.
If your for loop is counting to 10 then you don't want to use uint8_t because then the compiler will need to waste instructions masking the larger registers down to 8 bits everytime the value is used.
In the 1000000 case, whoever is porting the software needs to pay attention to the compiler warning.
>you could type int16 in C, but it would only guarantee that variable will be at least 16 bits
See above. If what the developer thinks is 2 bytes is really 4 bytes then the compiler needs to mask out the upper 2 bytes everytime it's used otherwise the garbage in the upper 2 bytes will break the code.
>C niles and their committee are too brain damaged for that
Ree'ing about how a language designed for speed and portability on 50 year old processors doesn't conform to modern language aesthetics is pretty retarded even as a troll.
>I see zero reasons why diff or version control couldn't be implemented for binary format of point and click program
Everytime you add or remove features from the binary format you would need to update the diff tool. You won't be able to use an old diff tool with newer binary formats. If the format diverges then you need different diff tools for different flavors of the format and they'll quickly be incompatible.
The only reason you would want that is if you are trying to lock users into a proprietary format.
If your goal is to get shit done then ASCII markup with a simple text diffing tool is the way to go.
>>9571 >Empirically most ints are small.
Empirically most executions of a program do not error. The rare cases that do error can potentially have disastrous side effects. For example bitcoin had a bug where you could send transactions whose outputs when summed overflowed allowing you to pay almost nothing to create those UTXOs.
>you don't want to use uint8_t
Since you care so much about speed, you should be using uint_fast8_t instead.
> The compiler will need to waste instructions masking the larger registers down
This is not required. The compiler is free to keep a uint8_t value in a wider register and treat the upper bits as don't-care padding, for example.
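you can see the difference yourself; a minimal sketch (the sizes printed are implementation-defined, and on a typical 64-bit glibc system some of the fast types come out wider than their exact-width counterparts):
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    /* the exact-width type is exactly 8 bits; the "fast" variants are whatever
       width the implementation considers fastest, which may be wider */
    printf("uint8_t:       %zu byte(s)\n", sizeof(uint8_t));
    printf("uint_fast8_t:  %zu byte(s)\n", sizeof(uint_fast8_t));
    printf("uint_fast16_t: %zu byte(s)\n", sizeof(uint_fast16_t));
    printf("uint_fast32_t: %zu byte(s)\n", sizeof(uint_fast32_t));
    return 0;
}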
>In the 1000000 case, whoever is porting the software needs to pay attention to the compiler warning.
The compiler does not warn in every case there is an error due to an assumption of int's size. I would not be surprised if doing so was undecidable from Rice's theorem.
>the compiler needs to mask out the upper 2 bytes everytime it's used otherwise the garbage in the upper 2 bytes
As mentioned above you can use the first two bytes as padding bytes.
>will break the code
Most operations will work fine and not require anything special. Off the top of my head you just need to worry about division and right shifts.
>a language designed for speed and portability on 50 year old processors
If it was designed for portability assumptions about widths of integers being used would be explicit and would have originally contained C99 types like uint_fast8_t.
>Everytime you add or remove features from the binary format you would need to update the diff tool
No, this is not required if we are just trying to compete with UNIX tier diffs.
>The only reason you would want that is if you are trying to lock users into a proprietary format.
<if it is not plain text, then it is proprietary
What a UNIX weenie.
>If your goal is to get shit done then ASCII markup with a simple text diffing tool is the way to go.
If you want to get actual work done with text based programming you would be using a diffing tool that computes the difference between the ASTs of both files. Making diffs based off partitions made by new line bytes is a joke.
>>9586 >Since you care so much about speed, you should be using uint_fast8_t instead.
You still seem confused
>why does C do thing X?
<because 50 years ago....
>but that's not important in 2019 herpa derpa
>If it was designed for portability assumptions about widths of integers being used would be explicit
Except 8 bit bytes weren't the standard yet. The pdp-7 was an 18 bit architecture for example. If the choice is between forcing an arbitrary byte size onto all architectures or leave byte size implementation defined, the latter option obviously wins at both performance and portability.
>if it is not plain text, then it is proprietary
Nobody said that. If a format requires complicated parsing and every small update completely breaks 3rd party tools then it's only good for locking users into a proprietary ecosystem (e.g. PDF and Microsoft Office).
If you don't care about locking users into a proprietary ecosystem then just use a lowest common denominator like ASCII and save yourself the trouble of constantly rewriting basic tools like diff.
>If you want to get actual work done with text based programming you would be using a diffing tool that computes the difference between the ASTs of both files.
So you checkin source code to your VCS but when there's a conflict you need to extract ASTs from the compiler and compare those instead? That's a good idea.
>>9566 >>stdint.h is just a solution for C & unix brain damage
>surprise: c cares about backwards compatibility.
you'd have to be braindamaged to actually believe he doesn't know that, given that his post implies it. and you haven't countered his actual argument which is that C should have just had fixed size ints from the get go. funny that i only ever see C programmers making such moronically illogical arguments
>>9571 >In the 1000000 case, whoever is porting the software needs to pay attention to the compiler warning.
C programmers need to pay attention to much more than just figuring out their ints are too small at the last second before release.
>Everytime you add or remove features from the binary format you would need to update the diff tool. You won't be able to use an old diff tool with newer binary formats. If the format diverges then you need different diff tools for different flavors of the format and they'll quickly be incompatible.
I mean for starters, you could have generic structures (such as an untyped tree) that get diffed, which text is just an example of after all.
>The only reason you would want that is if you are trying to lock users into a proprietary format.
lmao
>>9586 >The compiler does not warn in every case there is an error due to an assumption of int's size. I would not be surprised if doing so was undecidable from Rice's theorem.
It's impossible to cover all cases obviously.
>>9587 > If a format requires complicated parsing and every small update completely breaks 3rd party tools then it's only good for locking users into a proprietary ecosystem (e.g. PDF and Microsoft Office).
ASTs dont require parsing, since they are already an AST. yes you have to serialize it at some point, which is where you use a prefix code, which is more efficient and easier to implement than any text parser crap
>If you don't care about locking users into a proprietary ecosystem then just use a lowest common denominator like ASCII and save yourself the trouble of constantly rewriting basic tools like diff.
Everyone uses latin-1^Wutf-8 now. your text as the absolute primitive doesn't exist
>So you checkin source code to your VCS but when there's a conflict you need to extract ASTs from the compiler and compare those instead?
>extract ASTs from the compiler
not even a surprising answer from a C-nile
You still seem confused
>>9605 >C should have just had fixed size ints from the get go
Why would I even bother arguing such a thing? Maybe it should have, although the other anon has explained why it didn't. We can't change the past, only the future.
>funny that i only ever see C programmers making such moronically illogical arguments
Python programmers would probably have similar arguments (python2->3). Javascript has had so many idiotic misfeatures over the years, I'm sure webshits would make similar arguments. Actually, every language in common use has experienced large evolutions over the years. The only moronic thing is judging a language based on how it was.
>Except 8 bit bytes weren't the standard yet.
That does not matter though. Just add in your alternate sized data types.
>So you checkin source code to your VCS but when there's a conflict you need to extract ASTs from the compiler and compare those instead? That's a good idea.
No, you always just check in the AST itself.
>>9613 >That does not matter though. Just add in your alternate sized data types.
If you need to use uint8_t for one architecture and uint6_t for a different architecture then it's not a portable language anymore.
Just using int for both and letting the compiler figure it out is the reason C managed to outlive so many CPU architectures.
>No, you always just check in the AST itself.
You wrote your code in a high level language and the code in the repo is in a high level language but to diff them you're going to compare the Abstract Syntax Trees instead. I'm going to go out on a limb here and say you've never actually tried that, kid.
>inb4 lisp has no syntax it already is an AST im obviously talking about lisp you braindead C zealot herpa derpa
>>9615 (i am >>9605, not >>9613)
i have typed functional language with no text syntax. the editor works directly on the AST. it works perfectly, eschews the need to learn about parsing bullshit (where serialization is needed, prefix codes are used, which are easier to implement, and more efficient in time and space), has no ambiguities, and takes 10 minutes to learn. i havent written a diff tool for it yet but it will be a simple exercise. there will still be ambiguity but it wont be near as much of a clusterfuck as mainstream PLs. also since code/types in my language are content addressable, even renaming things wont make a diff
>>9615 >If you need to use uint8_t for one architecture and uint6_t for a different architecture then it's not a portable language anymore.
If the person needs 8 bits to do something, then you will not be able to do with only 6 bits. There will be a bug introduced by using only 6 bits of precision. It is better for it to not compile than for it to compile with new errors.
>You wrote your code in a high level language and the code in the repo is in a high level language but to diff them you're going to compare the Abstract Syntax Trees instead.
See >>9617 you have the editor prettify the AST that is being stored on your machine / the repo.
LMAO guys
>https://en.wikipedia.org/wiki/C_data_types >short int
<Short signed integer type. Capable of containing at least the [−32,767, +32,767] range;[3][note 1] thus, it is at least 16 bits in size
>unsigned short int
<Short unsigned integer type. Contains at least the [0, 65,535] range;[3]
>int
<Basic signed integer type. Capable of containing at least the [−32,767, +32,767] range;[3][note 1] thus, it is at least 16 bits in size.
>unsigned int
<Basic unsigned integer type. Contains at least the [0, 65,535] range;[3]
>long int
<Long signed integer type. Capable of containing at least the [−2,147,483,647, +2,147,483,647] range;[3][note 1] thus, it is at least 32 bits in size.
>unsigned long int
<Long unsigned integer type. Capable of containing at least the [0, 4,294,967,295] range;[3]
So, there IS in fact the spec for this shit. You just shouldn't assume your ints are larger than this spec says if you want to port it.
Also my ganoo compiler says
<overflow in implicit constant conversion
When I try to feed it the bad constant values. It doesn't give it for int being larger than 32,767 (but less than something out of 4-byte bounds) though, but I suppose it's because the program is being compiled by this particular compiler for this particular architecture and that shit got me covered.
That being said, it's a really OLD woe about C "portability". Your code is not inherently portable if you make wrong assumptions, alas. In fact, your code in C is not "inherently portable" at all because you can write stupid shit like assuming endianness and whatnot. That doesn't make it so that "hurr your language is nonportable garbage cniles btfo". You have to do better than that.
<me thinken about rewriting all programs that might care about it to "long int" now
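if you'd rather have the assumption checked than rewrite everything, something like this works (a minimal sketch, assuming a C11 compiler for _Static_assert): make the "int is at least 32 bits" assumption explicit so the build fails loudly on a 16-bit-int platform instead of silently truncating values.
#include <limits.h>

/* refuse to compile on platforms where int cannot hold the values we rely on */
_Static_assert(INT_MAX >= 2147483647, "this code assumes int is at least 32 bits");

int main(void)
{
    int big = 1000000;   /* only safe because the assertion above is checked at compile time */
    return big == 1000000 ? 0 : 1;
}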
>>9615 >If you need to use uint8_t for one architecture and uint6_t for a different architecture then it's not a portable language anymore.
why not? we can have bool, 8, 16, 32 bit values even on 64 bit operating systems
if you used uint6_t and compile to 8/16 bit architecture the compiler could use uint8_t instead of uint6_t, or somehow simulate the uint6_t
>Just using int for both and letting the compiler figure it out is the reason C managed to outlive so many CPU architectures.
yes, great idea when you used int to store value 25500 and then compile the code to 8 bit architecture. how will the value 25500 fit into 8 bit int?
>>9631 >You are a fucktarded monkey.
>If you explicitly need 8 bits then explicitly use uint8_t.
you always explicitly need some bit size, unix idiot
you use ints to store values. for example you want an int that will be used to store values between 0 and 30000. that means you explicitly need at least 16 bits. if you just use int, when compiling to 8 bit architecture the software won't work
I don't know any situation where you wouldn't need specific int size
>>9641 this spec is shit though
why does it say int should be at least 16 bits when short int already is at least 16 bits?
why does 32 bit architecture makes int 32 bits instead of 16 bits? then programmers rely on this size, which will be different on another architecture
if we had architecture that had 6/12/18/24 bit values then it would make sense to use 18 bits for short int (bigger than 16 bits), but why use 32 bit for int when stupid spec says 16 bit?
also, it would be better if short int/int/long int didn't specify any bit size, but instead just guarantee range of values that you can store, then compiler would choose data structure and size for architecture you are compiling on, that will accept those range of values
pascal style languages let you make variables by specifying what range should it store (for example from 0 to 40000)
this is safer (there will be error when value is outside this range) but also allows compiler to choose appropriate low level structure to store it
Pascal is superior to shitty C
>>9646 >why does it say int should be at least 16 bits when short int already is at least 16 bits?
Because the PDP-11 was 16 bit. In C you originally had only char and int for the size of your integers.
>>9646 > why does it say int should be at least 16 bits when short int already is at least 16 bits?
Do I look like a boomer? For all we know, this might be from fun times of pre-standardized C.
There is a spec for portability. Use that if you want portability.
>why does 32 bit architecture makes int 32 bits
I don't know, but I would assume that an 80386 CPU working in protected mode is using 32-bit registers all the way anyway and making it work with ax/bx/cx/dx/etc instead of eax/ebx/ecx/edx wouldn't make much of a difference. I don't know though.
>when stupid spec says 16 bit?
Spec says "at least". Surely you are a big boy and can comprehend what that means.
>but instead just guarantee range of values that you can store, then compiler would choose data structure and size for architecture you are compiling on, that will accept those range of values
It is actually what the page says. The range value takes priority and the bytesize is the assumption based on how most architectures store integers.
> Pascal is superior to shitty C
Pascal probably does this particular thing better, but the C compilers do that too if you pick an appropriate type.
>>9642 >I don't know any situation where you wouldn't need specific int size
Where you want the fastest arithmetic possible without caring about the size of the int and only care about the upper bound being at least 16 bits or whatever. Hardcoding the int size to 32 bits will be slow on 16 bit machines. I don't know all the architectures so who knows my point may be moot in practice (but I don't think so). TBH the only high perf C project I maintain right now is an image viewer and it needs a complete rewrite to support 16-bit machines with non-shit performance, so int didn't help here. I just went with uint32_t because anything smaller will make the program unusable.
>>9654 yeah i guess i should use uint_fast32_t
cant be 16-bit because that results in bounds being too small. so i simply dont support 16-bit arch for now (they will emulate 32-bit ints and be slow, but still never crash)
>>9617 >i have typed functional language with no text syntax. the editor works directly on the AST. it works perfectly
Visual programming languages work perfectly for teaching 10 year olds how to code. For complex real world problems you need higher levels of abstraction. Nobody gives a shit if your compiler is "easy to implement". One day you'll grow up and realize everyone is not an idiot, it is you who doesn't understand.
>>9642 >why not?
Because rewriting the code for each architecture is the antithesis of portability. You might as well write assembly.
>the compiler could use uint8_t instead of uint6_t, or somehow simulate the uint6_t
Masking and shifting bits on every read/write is slow. The point of C was to have comparable speed to assembly and also be portable.
>yes, great idea when you used int to store value 25500 and then compile the code to 8 bit architecture. how will the value 25500 fit into 8 bit int?
The compiler will warn you when you overflow.
Why do you keep repeating the same arguments that have already been debunked?
>>9961 >you need higher levels of abstraction
And you consider lines of characters as high level? What he's suggesting is more high level than what you have. In your model, adding something to a list means inserting a comma and then inserting ascii characters representing the number. When working on the AST you are just adding a number to a list. Notice how your way has to bother with things like working with characters, which really is not relevant to what you actually want to do, which is adding a number to a list.
>Because rewriting the code for each architecture is the antithesis of portability
You are not rewriting the code each time though.
>Masking and shifting bits on every read/write is slow. The point of C was to have comparable speed to assembly and also be portable.
Use the uint_fastX_t variant if you care more about having fast operations than about having a specific number of bits of space.
>The compiler will warn you when you overflow.
No, for Turing complete languages it is undecidable in general whether an addition will overflow.
>Why do you keep repeating the same arguments that have already been debunked?
Right back at you.
>>9661 >The compiler will warn you when you overflow.
Compiler will warn you only if it can statically tell that e.g. a constant falls out of type bounds on the implicit type conversion.
<char notabyte = 1337;
In general, languages indeed cannot tell if some values overflow unless you implement the addition yourself.
I have actually written some stupid program that does "long arithmetics exponentiation" with strings (char pointers, dynamically allocated) that are assumed to be positive decimal integers. It works flawlessly within given limits (I have imposed some reasonable limits so that the length of a resulting string doesn't go completely bonkers), though it is painfully slow because the addition operation literally adds characters minus 0x30 and multiplying/adding is done in a stupid loop. No particular reason to mention it here, I just feel it's kinda relevant.
Adding
>unsigned char a = 222;
>unsigned char b = a + a;
Will immediately fuck you though: a + a is computed as int (444) and then silently truncated back to 188 when it's stored in b.
>>9662 >And you consider lines of characters as high level?
Yes because you can express arbitrary levels of abstraction by mixing letters into unique names and at a much higher information density. That's the reason your college library isn't filled with picture books. They are all text, with visual representations only used to occasionally dumb down complicated concepts.
>In your model to add something to a list is inserting a comma and then inserting ascii characters representing the number. When working on the AST you are just adding a number to a list.
You'll understand better when you try to do something which is not already part of the language e.g. something that requires a type system like separating measurement units. You can hardcode the concept of integers into the language but a user can't express that inches cannot be added to centimeters in a visual way.
Those are some of the reasons why visual programming languages have only ever been successfully used in pedagogical contexts. They're good for teaching kids to solve very constrained problems in a simple way which doesn't require too much abstract reasoning ability.
For real world software development, visual languages have proven to be useless.
>You are not rewriting the code each time though.
It doesn't matter how you dress it up with preprocessor macros, you are writing two versions of the code, one for the 6 bit processors and one for the 8 bit processors. That's exactly what they were trying to avoid when C was invented.
>Use the uint_fastX_t variant if you care more about having fast operations than about having a specific number of bits of space.
Doesn't work in a world where different CPUs have different byte sizes. If you are writing code in an 8 bit byte world, why are you using C?
>No, for turing complete languages it is undecidable to prove if an addition will overflow.
If 25500 is a constant then the compiler will always warn you at compile time. Your strawman is that a large number emerges organically at runtime through incremental additions or whatever and is also vital to the correct operation of the program. My guess would be that doesn't happen very often and was an acceptable tradeoff at the time.
>Right back at you.
You think C is dumb because the world was completely different 50 years ago when it was actually designed. Just like how the Egyptians were dumb for building pyramids out of limestone instead of re-enforced concrete. Fine, whatever.
>>9666 >You can hardcode the concept of integers into the language but a user can't express that inches cannot be added to centimeters in a visual way.
Such a language can probably have an elaborate type system too. I am not familiar with those though.
>Your strawman is that a large number emerges organically at runtime through incremental additions or whatever and is also vital to the correct operation of the program.
He probably thinks that a lot of programmers will assume (incorrectly) that int is 4 byte in size and will design their programs accordingly, which will break once they try to port it to 16-bit architecture. He is not far from truth but I guess not much of such programs actually get ported to 16-bit.
>>9670 >Such a language can probably have an elaborate type system too. I am not familiar with those though.
>probably
https://duckduckgo.com/ Go on then. I'll wait.
>>9666 >Yes because you can express arbitrary levels of abstraction by mixing letters into unique names and at a much higher information density
Much higher than what? Additionally, we are not storing random data, but rather structured data. The freedom plain text offers us is not needed.
>That's the reason your college library isn't filled with picture books
Then why do all of my textbooks include images?
>You can hardcode the concept of integers into the language but a user can't express that inches cannot be added to centimeters in a visual way.
You could give each type specific connectors that have to be matched. For example blender uses different color dots for different types.
>For real world software development, visual languages have proven to be useless.
Okay. I'm not sure why you are bringing in this strawman about visual programming though.
>you are writing two versions of the code, one for the 6 bit processors and one for the 8 bit processors. That's exactly what they were trying to avoid when C was invented.
No, you are not. For both you simply would write uint_fast6_t. This would work on both platforms.
>Doesn't work in a world where different CPUs have different byte sizes
Yes, it does. The X stands for the number of bits, not bytes. This means you could easily add constants for any bits you would like.
>If you are writing code in a 8 bit byte world, why are you using C
Exactly, C sucks.
>My guess would be that doesn't happen very often
Integer overflow bugs do happen in real code. As mentioned above it happened in bitcoin, causing someone to turn ~$0.04 into ~12.91 billion dollars worth of bitcoin at the time.
>You think C is dumb because the world was completely different 50 years ago when it was actually designed
Yes, 50 years ago portability was much harder so you would expect features that support strong portability to be there since the beginning instead of having to wait for C99 to come around to add types guaranteeing a specific amount of bits.
>>8880 >Not knowing how to use Unix properly means the most prevalent operating system known to man is retarded not me.
>>8940 >>Name a superior alternative.
>Windows
>best solution is GUI solution
>>9168 >I can shit out an OS in SML or LISP in a day and it will be better in every way.
>This poster has wasted (24-06 = ) 18 attempts since posting.
<8ch mode="true">
If you were 1/10th as smart as you think you are this would ALREADY exist. It DOES NOT EXIST. Ergo faggot, you're NOT as smart as you claim, and every day for THE REST OF YOUR LIFE you wake up and this doesn't exist yet - remember how much of a dumb mother fucker you ACTUALLY are.
Now go make up some LARP excuse why you were not intelligent enough to manage your priorities to have made your superior OS yet - and never will.
</8ch mode>
Sage for low IQ OP bait thread
>>9661 >Visual programming languages work perfectly for teaching 10 year olds how to code.
you realize visual programming as an abstract concept doesn't imply meme bullshit like if statements in lego blocks or flow charts, right?
it's hilarious that every time someone mentions visual programming, some clown comes out spewing either your fallacy, or "oh noes it makes it too easy to code i will lose my job"
>Nobody gives a shit if your compiler is "easy to implement". One day you'll grow up and realize everyone is not an idiot, it is you who doesn't understand.
what b8 is this? we need the compiler to be auditable, not a bunch of newgrads changing it as part of their thesis.
>One day you'll grow up and realize everyone is not an idiot,
yeahhhh, because the software industry is just a pinnacle of human innovation.
>>9666 im not going to bother debating another inane anti-visual-language argument, but written languages could just be due to the constraint that it takes longer to draw stuff than to write stuff. meanwhile, chinese does work and chinese characters are composed of smaller primitives, they are not purely atomic. but with a computer stuff (like trees or boxes) can be drawn instantly, so that constraint is no longer there.
>>9674 hello eric the 17 year old neckbeard. let me help you understand some of the flaws with your reasoning
1. you consider the glorious capitalist/consumer system (lol) to be of even a higher standard than anything produced by previous nation states
2. due to 1, you consider UNIX a sound decision. and/or you think UNIX is the best OS because it's th most popular.
3. you consider OS dev hard even though it's a trivial exercise, especially with standardized BIOS interfaces (which is cheating but you probably consider it not to be). yes, porting to 16 architectures or making 500 drivers takes work, but we're just talking about making the base OS here. drivers dont change the overall design much
4. due to 3, you think i am making a claim that i am "smart" because i implied that i can create an OS
>"oh noes it makes it too easy to code i will lose my job"
also, let me elaborate on this. as someone who has worked in 15 languages from assemblers to C to perl to haskell and designed 2, and worked as a security analyst as well as a software engineer, im going to say programming languages are not easy, and anyone who thinks so doesnt know what they're talking about. even Java, even Java 6 or Java 7, is a language nobody will ever fully understand. you cant even audit this language because you dont know what all the syntactic (and semantic, e.g. https://stackoverflow.com/questions/16159203/why-does-this-java-program-terminate-despite-that-apparently-it-shouldnt-and-d [1]) constructs do in each case. generics in java are an esoteric concept (who actually fully understands capture conversion and its consequences, like 3 people in the world?) and after that it becomes an incomprehensible clusterfuck
so no, visual programming doesnt make it "too easy" to program, it (in the case of mine at least) makes it _possible_ to program. because now we can actually audit code and dont have to worry about memorizing 300 (this is not an exaggerated number for any modern PL) inane features that we will never use but need to be ready to recognize during an audit to avoid being tripped up by underhanded code
and where i talk of visual programming, i mean there is simply nothing to parse. so you dont need to emulate a stack machine in your head to process parentheses, you dont need to scroll around a text file to find the start and end of an expression, and there are no ambiguities (there could be but in my PL there arent). and of course i have other features like a tiny grammar with 5 constructs (as opposed to ML and haskell needing like 30 just to have pattern matching), and content-addressable code/types instead of identifiers - both of these also help alleviate the problem im talking about with modern PLs being too complicated
1. also lol at the neckbeard who is going to start spewing out the implementation of his particular hardware and some shit about fences, instead of recognizing the problem here is that we need to know what semantics java guarantees in the presence of concurrency
>>9672 >Much higher than what?
>Then why do all of my textbooks include images?
Both questions were answered in the post you're responding to. You just truncated the quote for some reason.
>Yes because you can express arbitrary levels of abstraction by mixing letters into unique names and at a much higher information density. That's the reason your college library isn't filled with picture books. They are all text, with visual representations only used to occasionally dumb down complicated concepts.
>Okay. I'm not sure why you are bringing in this strawman about visual programming though.
You said diff'ing text is dumb and we should all switch to visual programming languages with dedicated AST differs instead. >>9586 Are you admitting you were wrong?
>For both you simply would write uint_fast6_t. This would work on both platforms.
Finding a lowest common denominator requires you to know all the platforms you want to support in advance. The whole point of pushing the size decisions to the compiler means you can invent any kind of CPU in the future and write a C compiler for it without changing the C language.
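(For what it's worth, C already ships the "at least N bits, let the compiler pick the width" idea in <stdint.h>; a minimal sketch, where the printed sizes depend on the platform, which is exactly the point being argued:)

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    uint_fast16_t fast   = 100;  /* at least 16 bits, whatever width is fastest here */
    uint_least16_t least = 100;  /* at least 16 bits, smallest width available */
    int32_t exact        = 100;  /* exactly 32 bits, or this does not compile */

    printf("uint_fast16_t : %zu bytes\n", sizeof fast);
    printf("uint_least16_t: %zu bytes\n", sizeof least);
    printf("int32_t       : %zu bytes\n", sizeof exact);
    return 0;
}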
>>9677 >"oh noes it makes it too easy to code i will lose my job"
I'm saying the exact opposite actually. Visual programming is not expressive enough to replace text based languages in solving real world problems. It appears to be working for you now because you are creating the language and subconsciously know what kind of problem you want to solve with it. If you leave the language alone and try to solve a different problem with it, then you'll understand.
>yeahhhh, because the software industry is just a pinnacle of human innovation.
Like I said, at some point your chuuni phase will pass and you'll realize everyone is not an idiot and there's a reason why things are the way they are.
>>9673 >By "can probably" I mean that if none of the existing languages do, a new language can be conceived.
Why are you defending something you don't know anything about?
>fuck the experts who have studied this shit for years, I've done zero research but I'm the smartest and my opinion is...
You're just as bad as the other guy.
>>9704 >You said diff'ing text is dumb and we should all switch to visual programming languages
I never brought up visual programming.
>Finding a lowest common denominator requires you to know all the platforms you want to support in advance
No, the amount of bits you need is based off the problem you are solving and not the platform you are solving it on.
>Visual programming is not expressive enough to replace text based languages
Text based programming is a subset of visual programming.
>>9704 >Why are you defending something you don't know anything about?
I'm just being open-minded.
I like the idea of not being required to parse bracket sequences.
Ultimately, programming is not about learning the syntax and grammar, and the argument you make that some ASTs (abstract syntax trees?) are going to be worse than the others doesn't look legit at all. If a language can implement memory allocations, loops, branches, pointers, arithmetic, it already can do pretty much anything. If you want some elaborate type system, you can have it too, though you probably will have more problems visualising some ideas, but you can always just go with something stupid that looks like a table with certain rules, can you not? And it's not like it's prohibited to use text at all, so we can have tooltips, raw values and whatever else.
It will make things bloated though, but the contemporary development is inconceivable without bloated development tools at this point, and by that I mean "all big boys in the industry use it and you will use it too".
>>9677 > it's a trivial exercise
Known knowns:
1. You believe "unix/GNU/linux/FSF are the same, shit thing"
2. You claim you "can shit out an OS in SML or LISP in a day and it will be better in every way" because "it's a trivial exercise".
3. You keep coming back to this thread for weeks to bitch instead of making a replacement in a day.
Lel! Your logic is so whack you couldn't program for shit. As useless as a wooden furnace. Put up or shut up LARPer.
Post the code repo for your magnificent "trivial exercise" replacement or STFU.
>>9706 >I never brought up visual programming.
If you're not >>9586>>9605>>9617 then you still responded to arguments intended for that person, so it's a bit late to turn around and say this has nothing to do with you.
>No, the amount of bits you need is based off the problem you are solving and not the platform you are solving it on.
Your argument was to have types with *at least* X bits and then the compiler scales up to match the platform word size with no performance impact.
That only works if X fits in the smallest byte size on all platforms.
Otherwise you need to do multi byte operations to make your *at least* X bit types work on platforms with a smaller byte size, which is horrifically slow and introduces invisible concurrency issues because 100+100 is no longer a single CPU instruction.
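(Rough sketch of the "no longer a single instruction" point: an "at least 48-bit" counter emulated with two 32-bit words on a hypothetical 32-bit-only target takes several instructions to bump, so a concurrent reader can observe a half-updated value.)

#include <stdint.h>

struct wide { uint32_t lo, hi; };    /* emulated wide counter on a 32-bit-only machine */

void wide_add(struct wide *w, uint32_t n)
{
    uint32_t old = w->lo;
    w->lo = old + n;                 /* step 1 */
    if (w->lo < old)                 /* carry out of the low word */
        w->hi += 1;                  /* step 2: a reader between the steps sees a torn value */
}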
>Text based programming is a subset of visual programming.
You're just embarrassing yourself now.
https://en.wikipedia.org/wiki/Visual_programming_language
>In computing, a visual programming language (VPL) is any programming language that lets users create programs by manipulating program elements graphically rather than by specifying them textually.
>manipulating program elements graphically rather than by specifying them textually.
>rather than by specifying them textually.
>>9718 I made the first two and the post doesn't explicitly mention visual programming. Just that there isn't a syntax because you are working on the AST directly. It very well could be visual, but just working on an AST does not make it visual programming.
>bad performance if you can't natively hold the number
Bad performance is better than getting the wrong answer back.
>invisible concurrency bugs
Fair point. Maybe add a type that adds a lock on the value in that case.
>You're just embarrassing yourself now.
Text can be represented by graphics. It's the same reason a CLI/TUI are subsets of a GUI.
>>9723 >It's the same reason a CLI/TUI are subsets of a GUI.
It's not entirely correct. TUI is kinda like GUI because you're supposed to interact with pseudographical elements and because you output information visually too, like colormarking, boxes, etc.
CLI in its pure form is nothing like GUI though. A CLI program basically can only wait for input in the form of characters and then shit out text-only output. CLI programs often work like one-shot events that don't take any input other than shell arguments. Often the only interaction a CLI program provides is some sort of shell loop where it expects tool-specific commands.
CLI can indeed be a subset of a GUI program but CLI is not GUI. There is nothing GUI in entering line after line of text and hitting Enter.
>>9704 >They are all text, with visual representations only used to occasionally dumb down complicated concepts.
false, since this implies it's always true instead of just sometimes true.
>The whole point of pushing the size decisions to the compiler means you can invent any kind of CPU in the future and write a C compiler for it without changing the C language.
>any kind of CPU
no, but a large amount of them (hopefully)
>...visual programming...
you seem to think "visual programming" means programming some non-general language, such as Prolog (for lack of a better example).
>Like I said, at some point your chuuni phase will pass and you'll realize everyone is not an idiot and there's a reason why things are the way they are.
you're just making it sound like you're one of them, unironically
>>9706 >Text based programming is a subset of visual programming.
based
>>9707 >It will make things bloated though, but the contemporary development is inconceivable without bloated development tools at this point, and by that I mean "all big boys in the industry use it and you will use it too".
graphical language doesn't imply bloat. you also almost hit the nail on the head here. they can make IDEs redundant. the whole reason for the IDE in the first place was to manage your imports and browse around code and such things. the other build script integration type stuff is largely irrelevant.
>>9709 i dont make such an OS because im making an OS to solve more problems than just being better than linux by a few inches
>>9718 non sequitur. let's get one thing straight. your shitty text as a primitive doesn't exist. not only do you constantly change between ascii, cp1234, latin-1, utf-8, but they are all nothing more than a fixed set of glyphs.
>>9732 TUI doesn't exist. It's just a GUI. The terminal itself is a GUI as well. A CLI program is something that outputs text only without LARPing as a GUI via metacharacters.
Meanwhile pic related because UNIX is so retarded and confused they cant even decide how input should work. Not only that but they can't even have a sane system like having a text field at the bottom of the terminal that doesn't get clobbered by stdout. Ironically in pic related, you can finally do this (almost) because when it gets into this shit mode, you can edit the text in that box.
>>9754 There are a definite series of principles that differentiate GUIs, TUIs, and CLIs. In principle, you could make tools that break out from the rules, but almost nobody does, so it's pointless to talk about. GUIs are about making all the options be buttons that people can click on, and using visual metaphors to represent the behaviour of the program. Command lines are about making as much capability instantly accessible as possible, and about facilitating scripting. TUIs are somewhere (literally anywhere) in between the two.
You can pretend the difference is meaningless all you want, but it plays very strongly into the topic of this thread. GUIs, by their nature, have the capacity to be way better than CLIs, because they can implement everything that CLIs can, and can make it intuitive and easy to use. However, a bare bones GUI will instantly have far less facility than a bare bones CLI. The terminal provides functionality like copy paste, unix pipes provide scripting facilities, and so on. When writing a GUI, you need to do all this yourself, and will likely do it more poorly than existing solutions. That is: prefer to do one thing and do it well, rather than try to do everything and fail at them all.
If you tried your visual programming experiment, you would be implementing, on top of a compiler, an editor, version control system, debugging faculties, and build system, from scratch. Existing solutions have had massive amounts of work put into them, so on day one you will be vastly worse than all of them in every way. The alternative is to pick the part of the toolchain that you think you can beat, and write a replacement for that. Necessary for this is that you interoperate with every other part of the stack.
>>9754 >graphical language doesn't imply bloat.
Maybe "bloat" is not the entirely correct way to put it, because "bloat" means "unnecessary pieces that got there because of the lack of better judgement" or something along these lines. However, such an editor will become a complicated integrated system as opposed to just a compiler, it will be more difficult to implement, to maintain etc.
There is nothing exactly wrong with that though imo, though you are likely to implement some human-readable serialization format for your "sources" anyway.
>TUI doesn't exist. It's just a GUI. The terminal itself is a GUI as well.
Well, we'll refer to it as "TGUI" then, whatever. The point is that it operates on characters, not on pixels, ever. It reduces the amount of shit programs have to do to make user and driver interactions (cli program in loonix just makes a write() syscall to write shit to the terminal for example).
>A CLI program is something that outputs text only without LARPing as a GUI via metacharacters.
Exactly.
> Meanwhile pic related becase UNIX is so retarded and confused they cant even decide how input should work.
GTK is retarded. Nobody should use this shit but people end up using it because FOSS devs use it for Gimp, for Furryfox, and basically since Furryfox uses it there is no point in not using this shit.
> Not only that but they can't even have a sane system like having a text field at the bottom of the terminal that doesn't get clobbered by stdout.
There is tmux or whatever other terminal multiplexer that has a status line of some sort, though I didn't mess with it. It probably cannot do much though because a C program has stdin, stdout, stderr, and that's it. Everything else will require additional deps to be installed. You could go the TUI route and provide some fixed screen space for some status messages, wiping it clean and whatnot.
Anyway true, the C model of making CLI doesn't have to be a standard, it's just there for historical reezuns.
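(To make the "it's all just bytes written to the terminal" point above concrete: a status line like tmux's is nothing but escape sequences mixed into the output stream. A hedged C sketch using standard ANSI/VT100 sequences, assuming an ANSI-capable terminal and hard-coding row 24; a real implementation would also set the scroll region so scrolling output never clobbers the bottom row.)

#include <stdio.h>

static void status(const char *msg)
{
    printf("\x1b" "7"       /* save cursor position     */
           "\x1b[24;1H"     /* jump to row 24, column 1 */
           "\x1b[2K"        /* clear that row           */
           "%s"
           "\x1b" "8",      /* restore cursor position  */
           msg);
    fflush(stdout);
}

int main(void)
{
    printf("normal stdout keeps going up here...\n");
    status("[status] 3 jobs running");
    printf("...while the bottom row holds the status line.\n");
    return 0;
}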
>There are a definite series of principles that differentiate GUIs, TUIs, and CLIs. In principle, you could make tools that break out from the rules, but almost nobody does, so it's pointless to talk about.
No, there is only GUI and CLI. And most CLIs are not even real CLIs.
>GUIs are about making all the options be buttons that people can click on
...and control via keyboard
>Command lines are about making as much capability as instantly accessible as possible,
CLI is merely a REPL. In UNIX, it's literally just the REPL of bash or sh or whatever. Whatever is added on varies from system to system.
>and about facilitating scripting.
No, that is not part of it, but you can pretend that and have lots of broken integration (shellshock, recent GPG bugs) up to string injection vulns. This is the most retarded shit on the planet. It's literally: a developer adds a new command to his shitty "CLI program", with some user-friendly syntax, as opposed to dev-friendly syntax (such as having a fucking standard language instead of defining a new language for every fucking argument in every fucking program). And throw in some envar bullshit and pwd bullshit et voilà unix braindamage.
>TUIs are somewhere (literally anywhere) in between the two.
No they aren't. They're just GUIs. The only difference is they're limited in what they can do, since the output is some bullshit instead of pixels. The only useful point of any TUIs is that it's faster than any modern UX'd GUI bullshit. Let's not go into the retarded use case of SSHing into a machine and using the TUI over SSH.
>GUIs, by their nature, have the capacity to be way better than CLIs, because they can implement everything that CLIs can, and can make it intuitive and easy to use.
Nigger you're dropping misconceptions here and there by the hundreds. GUIs aren't to make things "simple". They are a means to an end. I have an image viewer with wasd to pan and +/- to zoom. Is this to make something """simple"""? No, it's to let me pan/zoom. The alternative would be to run a command to somehow show the image (which you can't even do in a UNIX terminal without massive hacks, but you can do easily in my language's "REPL" since the output primitive is values and the primitive for display values is pixels) and restart the command 10 times:
./nigchink --panx 100 --pany 300 --scale 1.5
oops needs to be a little more right
./nigchink --panx 150 --pany 300 --scale 1.5
oops needs to be a little smaller
./nigchink --panx 150 --pany 300 --scale 1.4
etc.
>However, a bare bones GUI will instantly have far less facility than a bare bones CLI.
Apples to oranges. Obviously bash has more functionality than my image viewer. Now as for graphical programming languages, yes bash still has more features than my PL, because bash is a bloated piece of dickaids. I don't want fucking envars and $CWD and the other thousands of bash features that can blow up in my face. I want a high level programming language. And I have it. My language is general purpose, and console-friendly, where bash is only for trivial programs for fucking with strings and creating vulnerabilities, and mostly fails at being console-friendly, and like everything else in UNIX, there's no alternative, so this barely useful thing is considered the standard and mindlessly propagated throughout the decades.
>If you tried your visual programming experiment
I already have my "experiment". All that's left is to create an SML-like module system (this is like Java interfaces or Haskell type classes, don't think this is some sort of packaging bullshit) and add type polymorphism (currently it is lol no generics).
>The alternative is to pick the part of the toolchain that you think you can beat, and write a replacement for that. Necessary for this is that you interoperate with every other part of the stack.
the only way out of UNIX is to start from scratch. anything that tries to integrate with that garbage will be crippled to hell.
all this FUD about "LOL ITS A VISUAL PL IT WILL BE BLOAT/TOY" is pointless btw. the very first goal of my project is that a single person can implement the entire OS/PL himself given a spec. so no, i wouldn't put any feature in if it was bloat.
>>9770 >it's visual therefore it's bloat
nigger if i write a program in haskell that calls a foreign function to draw a pixel to the screen. and call that 2 million times to paint the entire screen. this will still be orders of magnitudes faster than pressing a button in a GNOME program.
>However, such an editor will become a complicated integrated system as opposed to just a compiler, it will be more difficult to implement, to maintain etc.
Why? because people can't monkey patch bullshit into my system?
>There is nothing exactly wrong with that though imo, though you are likely to implement some human-readable serialization format for your "sources" anyway.
No, I am not, I am against "human readable" bullshit. You deserialize the data into "live values" which the programming language can already display (the same way any shit REPL displays data).
get it?
I go to f2n3gt23789gn2g23gh.onion:1234 and send 1101010101101010101 and receive 101010101011010 and it displays in my console like:
Coordinates 1243.51 41251.55
I go to gnw78gb23n9g.onion:4235 and send 1010101010010101101011010100101 and receive 010110101101010101010010101010101010 and it displays in my console like:
[Thing 1.2 123 A
,Thing 1.2 551 B
,Thing 1.2 551 B]
why the fuck do I need to see the data in "human readable" form? if it's an invalid value it will just be an invalid value and the service can fuck off with their incompatible cancer
>for your "sources" anyway.
wait do you mean source code? no, source code is just more values
data A = A Int | B Int
with names removed would be
data 0x5718235125125 = 0x16251785612756125 0x12125125125125125 | 0x16251785612756125 0x12125125125125125
(0x5718235125125 as a name doesn't really exist either, it would be a hash of the entire type)
TUnion [Name 0x16251785612756125 0x12125125125125125, Name 0x16251785612756125 0x12125125125125125]
and we just serialize this into 101010101000101010
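(A hedged sketch of the content-addressing idea, my own illustration rather than the scheme being described here: the "name" of a definition is just a hash of its serialized form, so the same structure always gets the same identifier no matter what a human calls it. The serialized strings below are made up, and FNV-1a is used only to keep the sketch short; a real system would want a collision-resistant cryptographic hash.)

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* FNV-1a, 64-bit: illustration only, not collision resistant */
static uint64_t fnv1a(const unsigned char *p, size_t n)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < n; i++) {
        h ^= p[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

int main(void)
{
    const char *a = "union{ctor(int)|ctor(int)}";    /* name-free serialized type */
    const char *b = "union{ctor(int)|ctor(int)}";    /* same structure -> same id */
    const char *c = "union{ctor(int)|ctor(float)}";  /* different structure -> different id */

    printf("%016llx\n", (unsigned long long)fnv1a((const unsigned char *)a, strlen(a)));
    printf("%016llx\n", (unsigned long long)fnv1a((const unsigned char *)b, strlen(b)));
    printf("%016llx\n", (unsigned long long)fnv1a((const unsigned char *)c, strlen(c)));
    return 0;
}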
>Whoops, replied to the wrong pic. Surprisingly, most of the points still stand. Not that I entirely get your point though.
That terminal emulators are retarded braindamage that don't even provide a separate input field from the output field?
Also in UNIX their shitty excuse of a CLI is not simple stdin/stdout/stderr, but a full blown GUI manipulation system via metacharacter hacks (used by ncurses and such). Which is the excuse for not having a separate input field.
>>9568 Backwards compatibility with software is an absolute must and through all of software history it has been broken just to allow the Jew to sell more hardware.
However, software backwards compatibility just means being backwards.
>hurr let's break software because broken software exists
>>9956 >GUIs aren't to make things "simple". They are a means to an end.
The point of GUIs is to be discoverable. When the user doesn't know what to do they can just click around to find what works. With a CLI you need to read the --help or man page otherwise you don't get anywhere. But you can't script a GUI. Once you know what you need, a CLI is much faster.
That's why GUIs are popular in non-technical "user friendly" contexts like grandmas and CLIs are popular in professional contexts like sysadmins.
>why the fuck do I need to see the data in "human readable" form?
>the service can fuck off with their incompatible cancer
The point of structured formats like JSON is that they are *more* compatible than an ABI which is what the first example is.
>>9961 >Backwards compatibility with [hardware] is an absolute must and through all of [hardware] history it has been broken just to allow the Jew to sell more hardware.
Intel processors are backwards compatible with everything down to the 16-bit 8086 from the late 1970s. That's the reason CPUs need to be switched from real mode to protected mode by the bootloader every time they boot.
>Try reading more and posting less.
I'm disappointed little grasshopper, you don't have enough experience or knowledge to be posting on a big boy /g/ board like this. The least you can do is try to learn from others instead of just making shit up.
>>9956 >CLI is merely a REPL.
So, what's your point in general regarding this "CLI business"? That REPL is not CLI or what exactly?
>the very first goal of my project is that a single person can implement the entire OS/PL himself given a spec
No shit. Leenus did it, you can do it too. And there are more alternative OSes and meme OSes out there developed by tiny teams. Virtually nobody uses those OSes though.
On a side note, how many hours did you put into your project and how much work do you have left?
>i wouldn't put any feature in if it was bloat.
Will you put USB/SATA controller drivers and HDMI drivers and a TCP/IP stack in or is it too "bloat"?
>nigger if i write a program in haskell that calls a foreign function to draw a pixel to the screen. and call that 2 million times to paint the entire screen. this will still be orders of magnitudes faster than pressing a button in a GNOME program.
Haskell is not visual though. And having a good optimizer in your code is technically bloat too. And pushing a button that could do anything has nothing to do with dumping stuff into a framebuffer. So, your point?
>Why? because people can't monkey patch bullshit into my system?
The other post has it: "you would be implementing, on top of a compiler, an editor, version control system, debugging faculties, and build system, from scratch". If you already did it, how much of the dependencies did you implement? Forget about OS facilities, obviously, but your language has to have its own graphical primitives or whatever you call them and you have to implement a generic way things get put into a framebuffer. I mean, not literally how to dump shit into the FB, but what exactly you dump into there.
Compared to that, C language core is tiny.
>No, I am not, I am against "human readable" bullshit. You deserialize the data into "live values" which the programming language can already display
So, you want data parsing integrated into the language or what do we see here?
>it would be a hash of the entire type
So, you bring some hash implementation into your language. I'm just curious, how do you deal with collisions?
>source code is just more values
Well, you kinda serialized it already, just to show me. Are you sure this is not going to be useful for whatever bugs you might have in your language, at least? Or do you intend to sit with a hex editor all day every day?
Or putting the development aside, do you intend to read it on "your language"-capable and -enabled machines only?
>>9963 >The point of GUIs is to be discoverable.
No, that's the point of "user friendly" GUIs.
>When the user doesn't know what to do they can just click around to find what works. With a CLI you need to read the --help or man page otherwise you don't get anywhere. But you can't script a GUI. Once you know what you need, a CLI is much faster.
Well yeah if your entire thesis in life is
>ooga booga me have CLI me superior to GUI user
then its going to be hard to discuss the topic at hand
>That's why GUIs are popular in non-technical "user friendly" contexts like grandmas and CLIs are popular in professional contexts like sysadmins.
>The point of structured formats like JSON is that they are *more* compatible than an ABI which is what the first example is.
Wrong again. JSON is ill-specified and every implementation is slightly different. Just because you can encode something that passes the parser even though it's invalid, doesn't mean your shit is magically more compatible than some ABI. I use a prefix code for serializing values and you wouldn't ever want anything different (since it already encodes every possible value and there's no need to change it), unless you wanted some very specific encoding for efficiency purposes, in which case JSON, ASN.1, BSON, Protocol Buffers, XML, etc wouldn't be a choice either.
>>9965 >So, what's your point in general regarding this "CLI business"? That REPL is not CLI or what exactly?
a few things:
-CLI is where you input a "command" and it outputs stuff
-UNIX crap has no real CLI since they cheat and modify the entire terminal window via metacharacters
-there is only CLI and GUI. TUI is bullshit
-i just call it a console in my OS, since what you're doing is constructing a piece of code and running it. why should I call this a "command"? I don't even have commands, just a PL and you can define new functions and call them from the console or from other code
>On a side note, how many hours did you put into your project and how much work do you have left?
A couple hours a day for a few months, and I already have a working PL rendering via the framebuffer. I didn't even need a line of C aside from framebuffer junk. After the language is finished I will bootstrap it in assembly, make a simple disk format (but not files/folders like UNIX braindamage), make a boot disk, setup some simple VESA BIOS code for framebuffer, and say goodbye to UNIX forever. Then make some reverse engineering tools (finally no more UNIX braindamaged ollydbg, IDA Pro, radare, whatever the fuck) and replace the BIOS and boot from before the BIOS and laugh at Linshit/Windows machines that take 40 seconds to boot their bullshit C code.
>Will you put USB/SATA controller drivers and HDMI drivers and a TCP/IP stack in or is it too "bloat"?
Most driver shit is better done in a real language instead of C. The entire USB stack in every modern OS is an exploitable piece of garbage for no reason. Only because they used C to parse a bunch of data in paths that are not even performance critical. The Direct Rendering Manager is like 50K lines of C just to move a few datastructures around. And like all such crap it's poorly documented and they document all kinds of concepts they created as if they made a new and novel type of system. And after 5 minutes of looking through it, I found a case where it kernel panics from a simple userspace call (surprise surprise). TCP/IP is bloat but I still will support connecting to modern devices and internet regardless of what replacements I choose to make. This will not be "in kernel" though, it will be essentially the same as a userspace implementation. I will only use a limited amount of assembly and everything else will be in my PL. If someone wants to port this to a different arch, he can do it himself, which is the whole point of this system. I will have no web browser (aside from some trivial crap like Links), just crap to scrape websites and extract the useful data.
>Haskell is not visual though.
My PL is a general purpose one, like SML. It's not somehow magically slow just because it's visual. It compiles to the same shit.
>So, your point?
My point is that it's impossible to be more bloated than something like GNOME or KDE. Just use a rectangle drawing function with a C/assembly implementation and we're already fast enough forever even with an interpreted language.
>And pushing a button that could do anything has nothing to do with dumping stuff into a framebuffer.
>could do anything
Literally 99% of buttons in any modern GUI are this fucking slow to react. It's literally faster to go into the world's slowest language and make 2 million distinct FFI calls.
>And having a good optimizer in your code is technically bloat too.
I agree, I wont have one.
>The other post has it: "you would be implementing, on top of a compiler, an editor, version control system, debugging faculties, and build system, from scratch". If you already did it, how much of the dependencies did you implement? Forget about OS facilities, obviously, but your language has to have its own graphical primitives or whatever you call them and you have to implement a generic way things get put into a framebuffer. I mean, not literally how to dump shit into the FB, but what exactly you dump into there.
If I have a list of 1,2,3, it will be drawn as (in textual syntax)
cons(1, cons(2, cons(3, nil)))
but you can extend the "syntax" (which is a much simpler task here than in a textual language) to make it show like
[1,2,3]
the primitive for drawing is just displaying values. you can draw rectangles and stuff too which is lower level. no GUI toolkit yet (for windows/buttons, etc), and hopefully never needed
>Compared to that, C language core is tiny.
C is not tiny compared to anything, nor is it with whatever libs you consider "core"
>So, you want data parsing integrated into the language or what do we see here?
it can be as a library or not, it doesnt matter. the point is that there's an easy and optimal way to serialize shit and no reason to ever use anything else. when talking to remote services or the disk, theres no reason to even consider how its serialized, we just use this same format everywhere. like java, php, ruby etc except not an utter failure and with none of the RCE vuln bullshit from doing retard shit like putting "classes/functions" into the serialized data.
>So, you bring some hash implementation into your language.
yes, and you need it anyway for communication. the OS is not for personal use, its for programming in the large. to download code from someone you already need all cryptographic primitives. and they will be reused for all communication stuff.
>I'm just curious, how do you deal with collisions?
make it crash. dont make critical code assume they dont exist. also using strong hashes instead of >hurr durr let's use md5 or sha1 in the year 2000 because it's not completely broken yet
>Well, you kinda serialized it already, just to show me. Are you sure this is not going to be useful for whatever bugs you might have in your language, at least? Or do you intend to sit with a hex editor all day every day?
You will never need to hex edit shit. Prefix code generation is simpler than JSON parsing.
>do you intend to read it on "your language"-capable and -enabled machines only?
Yes. When it's finished and public, users will post pictures or run some heathen tool to turn shit into text to post on plebian legacy webshit venues.
>>9969 OK you answered a lot of my questions, thank you.
> -UNIX crap has no real CLI since they cheat and modify the entire terminal window via metacharacters
Except for all the cases they don't?
>why should I call this a "command"?
You don't have to call it a "console" either. Console is like a device consisting of a screen and a keyboard or something.
Well, whatever, not that this stuff really matters.
>After the language is finished I will bootstrap it in assembly, make a simple disk format (but not files/folders like UNIX braindamage), make a boot disk, setup some simple VESA BIOS code for framebuffer, and say goodbye to UNIX forever. Then make some reverse engineering tools (finally no more UNIX braindamaged ollydbg, IDE Pro, radare, whatever the fuck) and replace the BIOS and boot from before the BIOS
OK, hopefully we'll see you in a few years with a superior system.
>take 40 seconds to boot their bullshit C code.
Nah it takes much less than that.
>It's not somehow magically slow just because it's visual.
I didn't imply that. It can be considerably slower on machines with weak graphics though.
>It compiles to the same shit.
Is your code generation the same quality of Haskell's? Cool. I mean, AFAIK Haskell code is still a bit slower than optimized C shit but it's pretty darn fast.
>My point is that it's impossible to be more bloated than something like GNOME or KDE.
I think most of the slowness in unix gui comes from the X Window System. The state of X is just sad and there's like only 5 people on the planet knowing exactly what it (Xorg) does.
Every DE or whatever application has to run on top of it and talk X protocol (via libs or directly).
>I agree, I wont have one.
Your code may become significantly slower.
Anyway picking an appropriate integer/floating point type and possibly appropriate instructions for certain tasks is also a job for an optimizer.
Though again, probably I wasn't entirely correct, as a separate optimizer works on the ready-to-run code while you are going to make that decision during compilation (it's still complicating the codebase). How many passes BTW?
>If I have a list of 1,2,3, it will be drawn as (in textual syntax)
Out of curiosity, how is the program that fills N consecutive nodes of a list (where N is a positive integer) and then assigns some specific values M, O and K (of some appropriate type) to specific nodes R, G and L (all less than N) going to look in that language? And most importantly, how is it going to be entered by the user into your editor?
>the primitive for drawing is just displaying values
I immediately think whatever GUI advantages your system might have are waning. What you propose is basically ditching the 7-bit ASCII coding as the building base for programming languages. "Just display them values lol". Where is the flowchart or whatever for algorithm coding?
>C is not tiny compared to anything, nor is it with whatever libs you consider "core"
C compiler + standard library are going to be small (the relatively featured but non-mainstream pcc+musl for i386 take about 10 megabytes of source code), which is probably less than what yours is going to take.
>we just use this same format everywhere
Sure, everything would be simple if we used the same shit everywhere, regardless of complexity because we can just reuse the code for that shit. We don't though.
And you could do this with C too btw. Just implement a library of "some generic data types" (it would work as an interpreter for whatever the programmer puts in, converting it to binary objects with certain properties) or whatever, build the formatting guidelines into the compiler (it warns about printf format strings after all lol), force everybody to use it, win. Making everybody migrate to this shit won't work out though. Also it would be uglier than some actual language with that built-in, but industry doesn't give a shit about beauty.
>yes, and you need it anyway for communication.
Regardless of that, you bring that shit in. You will also use it to calculate probably thousands of hashes during compilation lol. All because you're opposed to assigning some ids for some reason.
>You will never need to hex edit shit.
Ehhh, this is not the way to plan for shit like this IMO.
Consider the following: your parser is buggy, therefore you fucked up the parsing, therefore executables are garbage and your editor shows garbage too. Or maybe your editor is buggy and you see some unreal shit, or write some bullshit into source files, fuck if I know.
>When it's finished and public, users will post pictures or run some heathen tool to turn shit into text to post on plebian legacy webshit venues.
Big D for Big Delusions. Anyway, I don't mean to shoot you down. I think you or maybe somebody else can benefit from such a language.
>>9977 >Is your code generation the same quality of Haskell's?
no and i dont have laziness/non-strict semantics. and anything that optimizes will probably be faster in most cases than my compiler (that said my editor/console which are the main mode of the OS, will still be faster than any gnomeshit)
>I think most of the slowness in unix gui comes from the X Window System.
nedit (a text editor made with the motif toolkit) is orders of magnitudes faster on my 1300MHz machine than anything ever in GTK or KDE. For me X11 is slow just to show a window (on the order of 10s of milliseconds, whereas gnome stuff can take actual entire seconds to do trivial tasks), but I don't know if this is ratpoison's problem or X's.
>Your code may become significantly slower.
it's possible. but at least i wont have to strain my wrists typing 40 commands and eyes reading 40 man pages to do some trivial task securely in UNIX.
>How many passes BTW?
what is a pass? all i care about is generating some assembly that compares cons and nil as integers instead of interpreting and flinging strings around.
>Out of curiousity, how is the program that fills N consecutive nodes of a list (where N is a positive integer) and then assigns some specific values M, O and K (of some appropriate type) to specific nodes R, G and L (all less than N) going to look in that language.
I can't post real examples but I can say it's basically like SML with boxes instead of text and a much simpler set of constructs. The program you ask for is kind of ambiguous, but here you go: I create a list of strings and simply have some that are "M","O", and "K", and replace those, by index. Or do you want to see something with pattern matching or nested data structure expressions? To keep this code short, I use SML's ints, strings and equality types (implicitly), print, none of which exist in my language, especially not "\n".
datatype 't List = Nil | Cons of 't * 't List
fun setList (e, 0, Cons(_, l)) = Cons(e, l)
  | setList (e, i, Cons(f, l)) = Cons(f, setList(e, i-1, l))
(* printList wasn't shown in the original post; filled in here to match the output below: each element followed by a comma, then "." for Nil *)
fun printList Nil = print "."
  | printList (Cons(x, l)) = (print (x ^ ","); printList l)
fun main () =
let
  val l0 = Cons("A", Cons("B", Cons("C", Cons("D", Cons("M", Cons("O", Cons("Lol", Cons("K", Cons("E", Nil)))))))))
  val _ = printList l0
  val _ = print "\n"
  val l1 = setList("!", 4, setList("@", 5, setList("^", 7, l0)))
  val _ = printList l1
  val _ = print "\n"
in ()
end
val () = main ()
$ mlton a.sml
$ ./a
A,B,C,D,M,O,Lol,K,E,.
A,B,C,D,!,@,Lol,^,E,.
>And most importantly, how is it going to be entered by user into your editor?
It's basically like using a text editor, except instead of moving glyphs around, you can move syntactic constructs around. For example
let val a = f(1)
val b = f(2)
val c = f(3)
in [a,b,c]
you could copy out f(2) and put it in the c line, resulting in:
let val a = f(1)
val b = f(2)
val c = f(2)
in [a,b,c]
or you could rename "c" to "ca" (and only have to do it once since we use content addressable identifiers: select the c in "val c" and hit a hotkey to begin renaming, type the new name, and hit enter, and every single piece of code everywhere is updated and guaranteed to be correctly renamed)
let val a = f(1)
val b = f(2)
val ca = f(2)
in [a,b,ca]
You cannot do something like entering invalid syntax.
let val a = f(1)
val b == f(2)
val c = f(3)
in [a,b,c]
Because there is no syntax to even edit. It's more like boxes. A let construct would look more like
---------
|a|f(1) |
|b|f(2) |
|c|f(3) |
---------
|[a,b,c]|
---------
You can't remove some line of the box like this:
---------
|a|f(1) |
|b|f(2)
|c|f(3) |
---------
|[a,b,c]|
---------
The "let" construct is atomic and a primitive, of the PL and the entire OS.
I know there is some "movement" of "structural editors" but I haven't compared them to my own stuff yet, and they will be crippled since they deal with existing languages such as Java.
>I immediately think whatever GUI advantages your system might have are waning.
The advantages are things like being able to securely and easily transfer values from one program to the other without any thought, being able to audit code (the first language that supports this btw), being able to read code in 2 seconds instead of executing a stack machine in your head to figure out where the parens start and end, etc. It's not a GUI for the sake of being a GUI.
>What you propose is basically ditching the 7-bit ASCII coding as the building base for programming languages. "Just display them values lol". Where is the flowchart or whatever for algorithm coding?
Yes, ASCII is not a real primitive.
>C compiler + standard library are going to be small (well realtively featured but non-mainstream pcc+musl for i386 are going to take about 10 megabytes in source codes), which is probably lesser than what yours is going to take.
my language has 5 constructs right now for code and 2 constructs for types and it can do everything SML can do. after adding type polymorphism it will have 1 or 2 more. after adding modules, it may have maybe 10 more. meanwhile C probably has 100 different constructs even if you limit to ANSI whajamacalit/C89. libraries in a polymorphic language with actual real shit for programming in the large (like - from best to worst - SML style modules, haskell's type classes, rust's traits or java/C# style classes) are actually useful and composable, unlike in C, and don't require 100 lines of setup shit like #ifdef this //derp.h #ifdef that and fucking with linkers and build systems just to do something as trivial as a tree data structure over arbitrary types
> And you could do this with C too btw.
and you could use Google Protocol Buffers in C and it is dog shit and will be no matter how integrated it is. in any high level language i can call functions like f(A); f(C); f(B(X(1,2,3),Y(4,4,1))), using a remote service should be as easy as sendToService(A); sendToService(C); sendToService(B(X(1,2,3),Y(4,4,1))) (here B,X,and Y are data constructors, not functions. anyone reading can consider them as dicts or structs or whatever if they dont know what a data constructor is), and all semantics should be preserved. the service cannot receive C(1), it would simply be unable to parse that and ignore or kick the user off or whatever.
>You will also use it to calculate probably thousands of hashes during compilation lol.
its incremental so even when they need to be calculated its negligible
>All because you're opposed to assigning some ids for some reason.
namespaces are the cornerstone of unix braindamage. in my systems (including stuff i plan to have on top of the OS) you cant have name conflicts, anywhere. not in the language, not in the file system, not in the communication protocols. see http://www.skyhunter.com/marcs/petnames/IntroPetNames.html
>Consider the following: your parser is buggy,
i dont have a parser. there is only serialization via prefix-free code. and it's quite simple since data can only be a variant or record (in the case of record all you do is serialize each field and concatenate them in a well-defined and automatically-defined order)
if you have a type that can only be A or B the values can only be 0 or 1
A, B, C? 0, 10, 11
A, B, C, D? 00, 01, 10, 11
A, B, C, D, E? 00, 01, 100, 101, 110
if you fuck up implementing this - one of the simplest tree algos ever - you can't fucking program tree algorithms and should just give up making a language/OS period.
>Or maybe your editor is buggy
That can only result in "invalid" values, not malformed serialization.
A hint is that you can't deserialize a value unless you know the type.
E.g say we have a type named Blah (in SML syntax):
datatype Blah = A | B | C | D | E
and the proper serialization of each value is:
A 00
B 01
C 100
D 101
E 110
let's say we have some function that outputs something of type Blah, but its only expected to output A, C, or D. but it mistakenly outputs E
then the serialized value of E will be 110, and... 110 decodes as E
so there wouldn't be any hex editing involved in fixing this bug, just finding which code outputs an E
anyway the only reason off the top of my head why you would need to debug serialization is because of a broken channel or data recovery. in fact a prefix code may make data recovery harder. but i dont care. this is not a goal and i wont cripple the rest of the system to support this.
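(To pin down the Blah example: a hedged C sketch, my own illustration rather than anything from the OS being described, that encodes and decodes exactly the table above. Bits are kept as a '0'/'1' string for readability; note the decoder needs to know the type up front, and anything outside the table, like 111, is simply rejected.)

#include <stdio.h>
#include <string.h>

/* datatype Blah = A | B | C | D | E with the prefix-free code from above */
enum blah { A, B, C, D, E };
static const char *CODE[] = { "00", "01", "100", "101", "110" };

static void encode(enum blah v, char *out)
{
    strcat(out, CODE[v]);                 /* append the code word for v */
}

static int decode(const char *bits, size_t *pos)
{
    for (int v = A; v <= E; v++) {
        size_t len = strlen(CODE[v]);
        if (strncmp(bits + *pos, CODE[v], len) == 0) {
            *pos += len;
            return v;
        }
    }
    return -1;                            /* not a Blah value, e.g. "111" */
}

int main(void)
{
    char wire[64] = "";
    encode(A, wire); encode(D, wire); encode(E, wire);
    printf("wire: %s\n", wire);           /* 00101110 */

    size_t pos = 0;
    while (pos < strlen(wire)) {
        int v = decode(wire, &pos);
        if (v < 0) { puts("invalid"); break; }
        printf("%c ", "ABCDE"[v]);        /* A D E */
    }
    putchar('\n');
    return 0;
}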
>>9977 >>10022 >It's not a GUI for the sake of being a GUI.
In fact here's an important point:
All I did here is implement a console.
The console of UNIX (in practice) is some shit that draws text resulting from bash commands.
The console of my OS is something that edits and runs code of my "visual" language and outputs results.
In the same way as UNIX, but the results are values instead of text. This is the primitive of the entire system, and the goal is for it to work well for every use case.
All I do is copy a bunch of boxes around, press the execute button, and new boxes appear in the list of outputs in the console, and i can copy those values into other expressions or functions or whatever.
>>9969 >ooga booga me have CLI me superior to GUI user
You can store arbitrarily complex sequences of CLI commands in a text file and replay them whenever you want. Or email them to someone else.
If you try to record mouse movements for a GUI it is cumbersome and fragile because they depend on the window size and font size and window placement etc. it just doesn't work. For repetitive tasks you are forced to move the mouse around and click the same things manually like a monkey.
And to explain to someone how to do something in a GUI you need to record a video or a sequence of screenshots, which again is far more time-consuming and troublesome for both sides than just copying and pasting text.
GUIs are optimized for ease of use if you don't know what you want.
CLIs are optimized for fast use if you *do* know what you want.
>then its going to be hard to discuss the topic at hand
What makes it difficult to discuss the topic at hand is when you make up inane strawmen like the one above.
>Just because you can encode something that passes the parser even though it's invalid, doesn't mean your shit is magically more compatible than some ABI.
The reason JSON is more compatible than an ABI is because you implement a JSON parser once instead of every application having a unique format which needs to be forced into shape with awk,sed,etc. before it can be passed to the next application in the pipeline.
That's why e.g. radare2 and FreeBSD are moving to JSON outputs.
>>10031 >You can store arbitrarily complex sequences of CLI commands in a text file and replay them whenever you want. Or email them to someone else.
And to copy part of the command into your shell you need to clean it of metacharacters to reduce chances of your garbage shell getting pwn3d. And have fun looking up man pages for these 5 different programs with different input/output syntaxes, composing them with shitty languages like awk,grep,head,tail,tr,sed,bash,sh,whateversh, and looking up the other 3 commands online now that it's a fad to not even have real documentation ship with your project.
You can copy/paste an expression of my PL into a document (just, it wont be text, it will be a value instead), and even better, it's guaranteed to be the exact same thing on any machine since identifiers are by content. There are no commands in my system. There is only one PL. You can learn the PL in 5 minutes and every single function you will know 100% of the meta for (a property that UNIX can never dream of) and the documentation is done via haddock/javadoc/pydoc/etc-style attached to code.
>GUIs are optimized for ease of use if you don't know what you want.
I cant remember what your part of this thread was about. Are you rebutting my graphical language? This argument is incoherent to it. Just because its graphical, it doesnt suddenly lose all desirable properties of a programming language or REPL. And if you're arguing that everything should be CLI, this is also false. How can you draw on a computer (as in actually drawing, not composing boxes or vector graphics) without an interactive program? You literally can't. You need a high framerate low latency program with a good input (pen-like but even mouse-like can work) with high sample rate.
>CLIs are optimized for fast use if you *do* know what you want.
CLIs are bad for even that. The moment your one line becomes more than a few expressions it becomes hell to edit them. You have to hit right and left a million times, go back and forth from one end of the expression to another to add brackets, etc. You eventually have to pull out the text editor. Then there's the issue that you have to manage your imports which turns retarded half the time as well. Then you realize you can't paste stuff in all the time (because it literally doesnt work in some cases, like in Haskell or CoffeeScript for example, you cant paste in certain constructs) and have to make a file. This was another motivation for switching to a graphical PL. Now I can just infinitely extend my expression and if I find it worth saving hit the save button and type a name and enter. I don't have any imports to manage since content-addressing is used instead of identifiers.
>What makes it difficult to discuss the topic at hand is when you make up inane strawmen like the one above.
It's not a strawman. you are not one up. you are 4 down. You think you are providing sage information by stating the same old CLI vs GUI crap that's been said for 30 years.
>The reason JSON is more compatible than an ABI is because you implement a JSON parser once instead of every application having a unique format which needs to be forced into shape with awk,sed,etc. before it can be passed to the next application in the pipeline.
The same can be said of any format. But actually, you're only partially right, since JSON is ill-specified and every implementation has subtle differences. On top of this I wouldn't want an untyped language for communication between processes and network services, that seems kind of retarded.
>>10102 >And to copy part of the command into your shell you need to clean it of metacharacters
If what you're copying is actually a command then it doesn't have metacharacters.
If your point is bad commands do bad things then duh.
>have fun looking up man pages for these 5 different programs with different input/output syntaxes
What did I say? *Fast* to use when you know what you're doing. Not *easy* to use for novices.
>Are you rebuttling my graphical language? This argument is incoherent to it.
Nobody gives a fuck about your made up language you're too scared to link to.
>CLIs are bad for even that. The moment your one line becomes more than a few expressions it becomes hell to edit them.
Sounds like you need to learn about emacs/vi keybindings.
>It's not a strawman. you are not one up. you are 4 down.
Saying one is better than the other *is* the strawman. Nobody is saying that except you. If using a GUI makes you feel inferior to Unix weenies then talk to a shrink about it.
>If what you're copying is actually a command then it doesn't have metacharacters.
WTF are you talking about. If you copy from webshit, you can't be sure you're copying what you actually see. Whether there can be metacharacters or not in the clipboard depends on X/Wayland/gpm and probably other details, and is definitely not standardized or reliable even if you do find a case where it's "safe".
>Sounds like you need to learn about emacs/vi keybindings.
how does that help me while i'm running shit in a REPL? inb4 you show me some garbage hacked together way (which lacks any well defined delineation) of using vim/emacs to edit the input line of a REPL
>Saying one is better than the other *is* the strawman. Nobody is saying that except you. If using a GUI makes you feel inferior to Unix weenies then talk to a shrink about it.
No? I argue graphical languages are better than textual. This has nothing to do with CLI vs GUI. YOU are the one who keeps edging on (but not explicitly stating) the standard "i am UNIX programmer i know the CLI GUI is toy crap". Graphical languages can do anything a CLI can do the moment they have a REPL/shell/console/whatever.
>>10124 >If what you're copying is actually a command then it doesn't have metacharacters.
This is only true if you block BOTH javascript and css. Javascript can mess with your clipboard and css can obscure what you are actually highlighting. Here is an example of one which only needs css enabled to work.
https://thejh.net/misc/website-terminal-copy-paste
>>10168 The most you could probably get away with is some sort of newline making it so part of the command actually gets run as a command. It is much more limiting and less sneaky.
>>10170 no, you could re-enable CSS and failing that do half the shit CSS could do by exploiting bugs and unknown features in the browser. and this will always be the case since
A) the browser is pure unbridled bloat and they dont give 2 shits about making something small and understandable
B) it's not their job to pander to the broken terminal by removing anything that could be harmful when pasted to it (however i still dont agree that its a good design for copy to see stuff other than what you can see)
>>10162 >If you copy from webshit
I think I found your problem.
>how does that help me while i'm running shit in a REPL?
So you're the type of person who holds down the left arrow key for 30 seconds to get the start of the line and then assumes bash and everyone who uses bash are the stupid ones not you. Keep doing that.
>No? I argue graphical languages are better than textual. This has nothing to do with CLI vs GUI.
What do you think this thread is about? >>8940 You are the one who hijacked it with this visual programming shit no one cares about.
>YOU are the one who keeps edging on (but not explicitly stating) the standard "i am UNIX programmer i know the CLI GUI is toy crap".
>but not explicitly stating
That's because it's a strawman that you are projecting.
>>10183 >You are the one who hijacked it with this visual programming shit no one cares about.
The thread went something like this
>unix is shit because contrived curl problem
<here are 5 ways to solve that the unix way
>command line sucks gui is better
<here are things command line is better at
>plain text is a limitation
<plain text is more productive
>########### MUH VISUAL PROGRAMMING AUTISM ############
<thread derailed
>>10183 >>If you copy from webshit
>I think I found your problem.
No, this was already discussed (ITT or another thread on /g/ here). That's not the problem and you're retardedly oblivious and your shit is insecure if you think otherwise.
>So you're the type of person who holds down the left arrow key for 30 seconds to get the start of the line and then assumes bash and everyone who uses bash are the stupid ones not you. Keep doing that.
I mean, in Vim you can type 30left/30h, which is still moronic (imagine scrolling over 30 items just because the function name is 30 characters) but what else can you do in nixshit? Are you saying after something is 30 characters it's time to refactor it already? Not far from what I'd expect the UNIX braindamaged mind to think. I bet you also make a file in /bin and go through several rituals to establish a namespace for it and whatever other shit nixtards do instead of merely attaching a few unnamed atomic syntactic elements with a few keystrokes.
>That's because it's a strawman that you are projecting.
I don't think so suka.
>>10197 This is the UNIX hater thread, suka.
DAY OF THE SEAL SOON
>>10263 >imagine scrolling thirty items every time a function name is thirty characters
>imagine having no idea about movement keys
>imagine thinking that movement keys had never occurred to the people who wrote bash/readline
>imagine thinking that UNIX is the retarded one here
>I bet you also make a file in /bin and go through several rituals to establish a namespace for it and whatever other shit nixtards do instead of merely attaching a few unnamed atomic syntactic elements with a few keystrokes.
wut?
I liked the old unix haters, who had salient but misguided reasons for hating unix, and a thought out but nevertheless doomed plan for replacing it. This shit is just embarrassing.
>>9507 Came into this thread to say this.
Build something better, niggers. We desperately need a non-fucked kernel since Torvalds bent the knee to the trannies. A million fucking lines of code for a kernel is retarded anyway.
>>10276 The Linux kernel isn't the only one out there, there are other ones. They usually lack the momentum that Linux does, so they trail behind due to lack of developers. Starting from scratch is usually viewed as a waste of time unless the design is radically different or unlimited resources are provided. The number of work hours needed to get a new OS up to a point of usefulness is vast. Just look at Google Fuchsia, that project likely has whatever resources it needs and it's been three or four years in the process.
>>10276 Anyone can build a kernel. Twitter trannies do it for fun to show off their meme languages. The hard part is getting enough hardware support to avoid eternal meme status. That linux has accomplished this is precisely the reason it has a million lines of code.
Building a better unix is a waste of time. Yes, there are a myriad ways that unix is imperfect, but it is a decent foundation, and really it is the least of our woes. The web is in a disgusting state, and we need to focus on ways to fix it, not have arguments on the tier of tabs versus spaces.