>TL;DR - Is our glyph system efficient for computer science?
Let me elaborate. I'm taking an introductory computer science class as part of my engineering course, and a lot of the material covers logic gates, which leads into 2's complement representation of negative numbers, which leads to bit registers & ALUs, which eventually leads to ASCII representation of visual characters written in hexadecimal... you get my drift.
The point I'm trying to make is that the reason we seem to use hexadecimal at all is that it's a compromise: easier for humans than raw binary for writing out numbers, letters, and symbols, while still mapping cleanly enough to binary to be practical to compute with.
We only seem to *be* in this situation because we use 26 letters, 10 numbers, and 30-40 odd symbols and characters to represent term grouping, math functions, etc.
But do we actually need this many? Would a more efficient humanly readable, speakable, and teachable language make digital representation an easier process? Would compromising the number of unique symbols and glyphs mean a more complex code to represent the same thing?
Can we create a working human language and computer system with 32 unique glyphs? 10 letters, 12 symbols (a space included), 10 numbers?
>>8913915
One is sufficient if you try hard enough
I was under the impression that octal/hexadecimal are just used to make numbers more human friendly, not necessarily to relate them to ASCII.
>>8913915
What's the pic
>>8913950
Decimal is always more "human-friendly" but octal and hexadecimal are used for brevity. The character 'j' has an ASCII value of 106 = 0x6A = 0b0110'1010
You can see that the hex representation allows one to quickly deduce the binary representation with fewer glyphs, as each hex digit corresponds to a group of four bits. 0x6 = 0b0110 and 0xA = 0b1010
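A quick Python sketch of the nibble correspondence above, using the same 'j' example:

```python
# Each hex digit maps to exactly one 4-bit group (a "nibble").
c = 'j'
value = ord(c)                   # ASCII code point of 'j'
print(value)                     # 106
print(hex(value))                # 0x6a
print(format(value, '08b'))      # 01101010

# Split the byte into its two nibbles; each one matches a single hex digit.
high, low = value >> 4, value & 0xF
print(format(high, '04b'), '=', hex(high))   # 0110 = 0x6
print(format(low, '04b'), '=', hex(low))     # 1010 = 0xa
```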
>>8914224
looks like a weird system seven like skin for an android phone
>>8913950
Also hex/octal fit perfectly into 3 or 4 bit groupings so they efficiently and conveniently map to binary
>>8913915
Do we even need letters? We use them only when composing words. Why not just assign a unique number to each word and work at that level.
>>8913915
This isn't necessary. Raw text is cheap these days, and even then you can Huffman compress it efficiently.
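To make the Huffman point concrete, here's a minimal sketch (function and variable names are my own, not from any particular library) showing that frequent characters get shorter bit strings, so plain text compresses well below 8 bits per character:

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a prefix code: frequent symbols get shorter bit strings."""
    freq = Counter(text)
    # Heap entries: (frequency, unique tiebreak, tree). The tiebreak keeps
    # heapq from ever comparing two trees directly.
    heap = [(f, i, sym) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        # Merge the two least-frequent subtrees into one node.
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + '0')
            walk(tree[1], prefix + '1')
        else:
            codes[tree] = prefix or '0'   # single-symbol edge case
    walk(heap[0][2], '')
    return codes

text = "this is an example of huffman compression"
codes = huffman_codes(text)
encoded = ''.join(codes[c] for c in text)
print(len(text) * 8, 'bits in 8-bit ASCII vs', len(encoded), 'bits Huffman-coded')
```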
>>8914514
Essentially that would be the same as Chinese or similar languages. One symbol = one word. With a large number of words you'd end up needing very long binary representations. Especially since you'd need a separate number for every grammatical variation, every language, etc.
I would say that having that few letters & symbols is already extremely efficient.
You might be able to reduce it a little by using literal (phonetic) pronunciation, but that would increase the number of necessary letters in other places.
>>8914529
>With a large number of words you'd end up needing very long binary representations.
You could use the UTF-8 method to let common words take up fewer bytes. A ten-letter word takes up 10 bytes in ASCII. The same word would take maybe 2 bytes with my method.
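A rough sketch of that idea in Python (the function and dictionary here are hypothetical, just mimicking UTF-8's length-prefixed byte layout): give the most common words the smallest IDs, so they encode in a single byte.

```python
def encode_word_id(n):
    """UTF-8-style variable-length integer: small IDs (common words) fit in
    one byte; larger IDs spill into length-prefixed multibyte forms."""
    if n < 0x80:
        return bytes([n])                                   # 0xxxxxxx
    elif n < 0x800:
        return bytes([0xC0 | (n >> 6),                      # 110xxxxx
                      0x80 | (n & 0x3F)])                   # 10xxxxxx
    elif n < 0x10000:
        return bytes([0xE0 | (n >> 12),                     # 1110xxxx
                      0x80 | ((n >> 6) & 0x3F),
                      0x80 | (n & 0x3F)])
    raise ValueError("id too large for this sketch")

print(len(encode_word_id(5)))       # very common word: 1 byte
print(len(encode_word_id(1000)))    # rarer word: 2 bytes
print(len(encode_word_id(40000)))   # obscure word: 3 bytes
```

So a ten-letter word with a dictionary ID under 2048 really would cost 2 bytes instead of 10, at the price of everyone agreeing on the dictionary.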
>>8913917
that's not one unique glyph at all