On The Limits Of The Human-scale Bases

This forum examines bases other than twelve and less than sixty.


Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 6 2015, 10:40 AM #1

Icarus' tour defines a human-scale base as one between 7 and 16 inclusive. Now, the boundaries aren't known for sure: this older 2012 article by Icarus considers bases up to 20, while Icarus himself now doesn't appear to think 16 can plausibly be civilizational without resorting to less efficient multiplication methods like mediation/duplation. OTOH, some others (e.g. arbiteroftruth) have suggested that 6 is not too small, and is actually within the human scale.

We can do some analysis within the range from 6 to 20 inclusive. For one, a base must be even to have any chance in society, and not merely because of custom: an odd base is forced to have a totient ratio over half, and is bloated beyond the size of an even base with the same prime decomposition. See Icarus' thread on the subject. So I think it is safe to ignore odd bases, and the range shrinks to {6, 8, 10, 12, 14, 16, 18, 20}.

If hexadecimal's usefulness is questionable, so surely is that of octodecimal and vigesimal. Furthermore, following on Icarus' multiplication-table argument on why hexadecimal would take too much time, I am unsure if even tetradecimal is workable. To quote a post I made, following Icarus' argument about hexadecimal:
Double sharp @ Oct 6 2015, 10:14 AM wrote:
icarus @ Sep 29 2015, 03:01 PM wrote: The "mountains" are an interesting high country to camp. We can see for miles up there. But it is harsh and not for the faint of heart.

Ultimately we'd fare better building our city in the valley, between octal and around tetradecimal.
I am not even sure if tetradecimal is human-scale, come to think of it. Its multiplication table is already about twice as big: decimal has 55 unique facts (78 if you use a 12-by-12 table), while tetradecimal has 105. So, presuming every fact is equally easy, tetradecimal would take at least 4/3 times, and at most twice, as much class time.

But not all facts are equally easy. With two opaque totatives in tetradecimal {9, b} (as opposed to one, {7}, in decimal), and with the alpha dominance (more difficult than decimal's omega dominance), and the longer digit-sequences to memorize for divisors like {2}, I think tetradecimal would still be more difficult on average, even if its tables were the same size as those of decimal.

Combining these two factors, I would estimate that tetradecimal is between 3/2 and 9/4 times as difficult as decimal (assuming that the tetradecimal tables are about 9/8 times as difficult intrinsically, not counting their size), and that the acquisition of multiplication would take that much more time: on average, a full year more. It's not as bad as hexadecimal, but we'll still be behind using tetradecimal, with only the greater concision to comfort us.
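(A quick sketch in Python checking the table sizes quoted above: an n-by-n table has n(n+1)/2 unique facts once each commuted pair is counted only once.)

Code:

def unique_facts(n):
    # unique entries in an n-by-n multiplication table,
    # counting each commuted pair a*b = b*a only once
    return n * (n + 1) // 2

print(unique_facts(10), unique_facts(12), unique_facts(14))  # 55 78 105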

So I am not sure if even tetradecimal can be civilizational. Since it seems that 2 is very fundamental, it may be that only {(6), 8, 10, 12} are possible civilizational bases (and I'm not even sure about senary), in which case the ordering is probably {12, (6), 10, 8} or {(6), 12, 10, 8} (I'm not really sure which it is).
So maybe the upper limit is actually duodecimal. One of Icarus' old threads suggests that "we maximize our human computational capability when we use base twelve" (as it is the largest usable base), and that enlarging the base past twelve makes it more difficult to use. This gives the most easily memorized bases as {8, 10, 12}, with anything lower being trivially small.

The lower bound is a little more difficult to quantify...

P.S. It appears we can sort of assign the ideas of these four bases {6, 8, 10, 12} like so:

{6}, senary: pseudo-7-smoothness (or maybe the idea is pseudo-5-smoothness and 7 is a lucky afterthought), but 3 is more important than 5
{8}, octal: 2-smoothness (pure binary thinking)
{10}, decimal: pseudo-5-smoothness, but 3 can be sacrificed a little for 5
{12}, duodecimal: 3-smoothness

wendy.krieger
Dozens Demigod
Joined: Jul 11 2012, 09:19 AM

Oct 6 2015, 12:05 PM #2

The size of the base in part depends on the means of calculation. The assessment above is based on learning tables, whose size increases with the square of the base.

However, there are other means: the Chinese and Japanese abacuses attest that the visual limit is around five, and setting the heaven-row and earth-row to numbers like 4/5 or 3/6 would bring 20 and 18 well within the realm of human calculation.

One notes that the Mayans used base 20 extensively, including a date-count involving an 18-column (similar to us saying 116 for an hour and 16 minutes).

If one can suppose a heaven/earth row with different numbers, say against the Russian abacus, which only had rows with 9 beads on it, then a heaven/earth base of similar vein would also be in the human scale.

A base like 120 with AA is then not that much different in means from the eastern abacuses' partition of 10 into twice five, or the common partition of 20 into four fives. It brings 10 and 20 into instantly visible sizes, and 120 + AA is manageable with 12*12 tables.
Twelfty is 120 dec, as 12 decades. V is teen, the '10' digit, E is elef, the '11' digit. A place is occupied by two staves (digits).
Digits group into 2's and 4's, and . , are comma points, : is the radix.
Numbers written with a single point in twelfty, like 5.3, mean 5 dozen and 3. It is common to push 63 into 5.3 and viki verka.
Exponents (in dec): E = 10^x, Dx=12^x, H=120^x, regardless of base the numbers are in.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 6 2015, 01:22 PM #3

wendy.krieger @ Oct 6 2015, 12:05 PM wrote: The size of the base in part depends on the means of calculation. The assessment above is based on learning tables, whose size increases with the square of the base.

However, there are other means: the Chinese and Japanese abacuses attest that the visual limit is around five, and setting the heaven-row and earth-row to numbers like 4/5 or 3/6 would bring 20 and 18 well within the realm of human calculation.

One notes that the Mayans used base 20 extensively, including a date-count involving an 18-column (similar to us saying 116 for an hour and 16 minutes).

If one can suppose a heaven/earth row with different numbers, say against the Russian abacus, which only had rows with 9 beads on it, then a heaven/earth base of similar vein would also be in the human scale.

A base like 120 with AA is then not that much different in means from the eastern abacuses' partition of 10 into twice five, or the common partition of 20 into four fives. It brings 10 and 20 into instantly visible sizes, and 120 + AA is manageable with 12*12 tables.
I'm not considering bases like {3:6}, {4:5}, {6:10} and {12:10} for these purposes, i.e. "human-scale" bases in society. They are unstable in the long term, the evidence being the decay of the Babylonian sexagesimal to pure decimal. One can use them all one wants (and I've been rather taken by {6:10}, and recently also {12:10} somewhat), but it won't change the fact that many will find them too confusing. I agree that they are nice personal-use bases.

The Mayans did indeed get away with using vigesimal for a long time. The question is, did they use a multiplication table and our current algorithms? (The Babylonians at least had an abbreviated one.) Did everyone have to use large-ish numbers in the (decimal) hundreds, like today?

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 6 2015, 02:00 PM #4

Double sharp @ Oct 6 2015, 10:40 AM wrote: The lower bound is a little more difficult to quantify...
Number of Steps Needed to Multiply
We do have a sort of guide to work with. Mediation/duplation is commonly considered a rather inefficient multiplication strategy, yet in essence it is exactly binary multiplication, breaking numbers into parts based on their binary expansions. (Hence it would work well with hexadecimal.) So perhaps, while the binary tables are trivial, they are too small and don't cover enough commonly-occurring small products; there is so much splitting that in the end it takes more time than carrying out a straight multiplication in octal, decimal or duodecimal. This suggests that binary is too small for everyday purposes.

We would then have to measure how many steps on average it takes to multiply two numbers, and how efficient each step is. Some knowledge can be gleaned from the concision of each base, but this is not everything: senary will require one more place than decimal for most commonly encountered numbers, but each operation is easier, so this may cancel out.

Since odd bases need not be considered, the remaining case in question is quaternary, whose numerals are about 167% the length of decimal's (heh, but it's recognisable immediately). Hence we'll require two more places. But we've already seen that anything below {8} has a trivially small multiplication table, so the quaternary and senary tables would be essentially the same in memory. So if the trade-off is worth it in senary, it can't be worth it in quaternary.
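(A sketch of where these length figures come from: a number needs about log(10)/log(b) times as many places in base b as in decimal.)

Code:

from math import log

# relative numeral length versus decimal: log(10)/log(b), in percent
for b in (4, 6, 8, 12, 16):
    print(b, round(100 * log(10) / log(b)))

(This prints 166 for quaternary - essentially the figure above - and 129 for senary, matching the one-extra-place estimate.)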

Taking odd bases out of consideration, this measure would cap the lower limit at senary, base 6.

Compaction: Does a Power of a Base Feel Distinct from the Original?
There is also a common feeling that binary is too small, that it needs to be compacted into something like octal or hexadecimal. If we use something like base 100 (centesimal), it seems to break down immediately into two decimal places. Yet octal, nonary, and hexadecimal feel like distinct bases, with no urge to turn into binary or ternary.

Where is the borderline? The fact that quaternary is not used very much may indicate that it is little better than binary itself as a compaction, and itself gets compacted into its square, hexadecimal (though that's arguably too large for general arithmetic). OTOH, base 25 (pentavigesimal) appears too large to fit in memory, being beyond even hexadecimal, so perhaps quinary is the first base that would stand by itself.

Taking odd bases out of consideration, this measure would cap the lower limit at senary, base 6.

How To Prevent Digits From Being Duotonous
Binary is, for sure, the worst offender, as we get too easily lost in endless streams of 1's and 0's. But how low can we make the base before this becomes a problem?

Well, one way to do it is to see how likely it is that we will get distinct digits for each place...

Senary (base 6)
1 place: 100% (of course)
2 places: We can put down any digit in the sixes place, and all but one of the remaining choices for the ones place won't coincide with it, so it's (6/6)(5/6) = 83%.
3 places: (6/6)(5/6)(4/6) = 56%
4 places: (6/6)(5/6)(4/6)(3/6) = 28%
5 places: (6/6)(5/6)(4/6)(3/6)(2/6) = 9%
6 places: (6/6)(5/6)(4/6)(3/6)(2/6)(1/6) = 2%

So the halfway point occurs just past three places, meaning we'll have three digits on average before a digit repeats. One repetition is generally OK, so we'll have four places, that is up to decimal 1296, which doesn't seem so bad - the halfway point for no repeated decimal digits is four places (so 1296 is an order of magnitude lower), and with one repetition allowed we'll get to five.

OTOH, with quaternary (base 4) the problem is graver:
1 place: 100%
2 places: (4/4)(3/4) = 75%
3 places: (4/4)(3/4)(2/4) = 37.5% <-- oh dear
4 places: (4/4)(3/4)(2/4)(1/4) = 9.375%

Interpolating, the break-point is somewhere between two and three places, which will get us to between 16 and 64. Adding another place to allow for one repetition only gets us to under 256, which seems very small.

We need not even consider binary, for which the halfway point is two places (up to 4), and for numbers from 4 upwards there will always be a repeated digit.
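(A sketch of the computation used above, for any base: the chance that p independently uniform digits are all distinct is the falling factorial b(b-1)...(b-p+1) divided by b^p.)

Code:

from math import perm  # Python 3.8+: perm(b, p) is the falling factorial

def all_distinct(base, places):
    # probability that `places` uniform digits show no repetition
    return perm(base, places) / base ** places

for base in (2, 4, 6, 10):
    print(base, [round(100 * all_distinct(base, p)) for p in range(1, 7)])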

While these statistics are interesting, the problem is that I don't know how to quantify how many digit-combinations are enough. I can only say that senary doesn't seem obviously too small, as quaternary does. It is useful to have evidence that {2} and {4} are not usable, because then I can safely say the lower limit may include {6} (we have evidence that {6} may be OK, while {2, 4} are certainly not).

I guess I will be considering (at least for now) {6, 8, 10, 12} as the four bases that could be used in general society.

icarus
Dozens Demigod
Joined: Apr 11 2006, 12:29 PM

Oct 6 2015, 08:11 PM #5

Double Sharp, I've become more conservative with time. If we attempted to maintain our current algorithms and "work schedules", meaning we want to finish learning arithmetic in just a couple of years, then the scale seems to be {8, 10, 12}, maybe 14. To be on the "benefit of the doubt" side, I would include 14.

Bases above 14 suffer from a "learning horizon": the number of years it takes society to educate young children to use the base.

Senary seems to be disqualified not by the learning "horizon" but by the "word-length" problem, a human cognitive limitation in the implementation of the base in not-necessarily-mathematical ways. Indeed, "word-length" becomes an increasingly greater issue as we reduce the number of numerals (digits): the digits tend toward monotony, and the strings of digits grow long even in a fairly small numeral, like people's ages, quorums, and shoe sizes. Thus senary and below seem to suffer from a different cognitive limit: the ability to memorize the commonest numbers (small ones) and to have a sufficient number of combinations for other non-mathematical purposes (locker combinations, room numbers, etc.)

Some don't buy my issues with senary and that is fine. It's not "settled science", and because mathematics alone cannot supply the disqualification, I can't say for sure I am right. I just really really think so. We could see a senary culture evolve quite fine and not worry about the fact their locks don't have h[4344] combos in three bezels, or that they run out of room numbers after just h[100] rooms (versus h[244]). But they *might* discover using two sixes works quite nicely. I am not sure about the long-term stability of senary. Maybe it would "anti-decay" to dozenal. Some "reformers" get together in the h[11500]'s and begin to cobble together arithmetic and mensuration reforms. It's not that senary is impossible to use, just less convenient than its relative, the dozen.

If senary is *not* disqualified, then it does suffer from "digit monotony" and "word-length" to a greater extent than its double, duodecimal. It is hobbled. If your army can't discern between 1000 and 10000 rounds or troops or whatever, then the other army wins and senary disappears from the face of the earth. Perhaps we'd see a lot of base amplification to bring these little bases up to a nice size. Quinary could be considered a "building block" of decimal. It's handy (pun intended) that we have two hands. This inference really was the kicker. I wonder if elephants would use decimal? Octopuses? Hmm.

Now the "learning horizon" is a factor in _today's_ world. It's not a factor, say, in ancient times or the middle ages. We did not mass-educate people to be self sufficient thinkers (we aren't doing that today, now that public schools are politically-correct social-justice indoctrination camps, but I digress). I could see base 16 as civilizational in that mediation/duplation and bitwise numerals would have been retained for longer than we did in our current society. Assuming society is "progressive" (it isn't) all the hex civ would need to "hold out" for is the invention of electronic computers. Then the machines would've handled arithmetic more handily and maybe you have a "modern" hex civ. So I think if the "right" hexadecimal developed (mediation/duplation - we had that, bitwise digits - we didn't but they could have developed), then maybe a modern, industrial, postindustrial, politically correct and left-leaning hexadecimal could be in effect in today's screwed up world.

Tetradecimal is not as keen a bet. There isn't a keen way to fold it. However, maybe we just live with the longer drills in class. It would stink to be a kid, but hey, we teach 'em, and then they're out in the field cutting tube stock to "7.1a7" inches, "c" inches to a foot, because "10" is just a little long for 'dem toids'. We had vigesimal and sexagesimal civilizations via 4:5 and 6:10, thus we assume these "mixed-radix" civs are indeed possible. In pre-modern civilizations we could expect to see bases {2 - 10, 12, (14), 16}, 4:5, 6:10, maybe 12:10. It's possible to get weird things like 4:7 or 4:6. I don't know of a human civ having 4:7, but 7 has been such a "special" number for many cultures (maybe because of its "stubbornness" or "unattainability") that it might be possible to get a young civilization to use it for taxes, accounting, and censuses. Of {2, 3, 4, 5, 6, 7, 8, 9, 10, 12, (14), 16, 4:5, 6:10, 12:10}, it seems {8, 10, 12} are the stablest, with 10 inherent biologically, 12 the pointy-headed intellectual choice, and 8 the "gee I can cut things in half 'forever'" choice. Of course by choice, I mean crowd-mentality choice usually, thus we'd get {(8), 10}, or "we know better than you ergo we make the rules" choice, then maybe we might get 12.

The problem with many of the "off" bases is that they might be sacrificed for one of the easier-to-use aforementioned series, or they would "decay" into one or the other "staff" or sub-base. Of {8, 10, 12}, it seems that 10 is the base of choice in a "collapse", when partially-literate people are left to try to conduct business, counting what's handy (pun intended, of course). Committees would be needed to recognize the merits of high divisibility, it would seem. I think the only way we'd end up with the best base (dozenal) would be if the Romans had recognized that uncia work pretty well below the radix point, so why not above it? Again, it requires enlightenment (like democracy). Begin inscribing your dozenal arithmetic tables on stone, folks. When we're fumbling about in Braille after the west collapses, we want people to find "the right answer" next time. I know. Dark. Pun intended.

It's interesting to note that the "vigesimal civilizations" hadn't really collapsed. They were interrupted by disease and conquest. It would've been interesting to see when they might've dropped base twenty for something more sane (from our viewpoint). Better yet, maybe they might've devised a handy innovation (pun intended) toward arithmetic that we haven't discovered. (I don't know what that would've been. What we're doing is "memoizing" by memorizing all the combos in the table. Mediation/duplation leverages bitwise computation. CDM, or its manifestation in Babylonian times, is akin to memoization. Wendy likes to cite counting boards and abaci, which are indeed valid but require a device, meaning you have to get a device, implying trade. When trade disappears and society changes drastically, the base is subject to collapse.)

Pun always intended, come on. Some of my comments are tongue in cheek, settle down. I don't want civilization to collapse so we can have a dozenal civilization. I do want it because then I get to use my bunker for real. ; ) (I don't have a bunker come on I live in a major metro where would I hide that? I'd be among the first to croak.)

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 7 2015, 06:05 AM #6

To be honest, if I were to give things the "benefit of [the] doubt", I would go all the way up to hexadecimal, as it's so easy to find Internet hexadecimalists. It would then help to show not only that {16} is too large for modern society as it is today, but also that things go wrong once you get past repeated halving. The alternative-universe ideas are really cool, but I get the feeling that most advocates are looking for a base that could replace decimal in the real world today. Tetradecimal or hexadecimal may require some earlier divergence to have worked, and may still have been slower. (Although the lack of a good way to "fold" tetradecimal would seem to imply that the guess regarding the multiplication table size is right, and that at a minimum we'll be taking half a year more to learn multiplication.)

The point about senary being half of another usable base (duodecimal) is rather convincing. When using vigesimal, I keep noticing that for small numbers it's very easy to convert back and forth between decimal and its double-sized relative vigesimal. Now of course vigesimal is probably too big, but this creates an issue for senary: duodecimal is twice as big and is almost as easy. So there may be a pressure to double the base, like the attested doublings from quinary to decimal.

OTOH, there are some languages today that are senary, with 36 as the higher grouping of sixes (and then 216, 1296, 7776): see Chapter 6 of Hammarström's Unsupervised Learning of Morphology and the Languages of the World. This is striking as all known quinary languages use 10 or 20 as the higher grouping of fives. So while there has been lots of pressure to double the base from quinary, it being odd, it appears that there has not been such pressure for senary. Perhaps 36 is thought to be a convenient larger grouping?

It's also interesting that many senary languages have made words for high numbers, with Kómnzo going up to 6^6 = 46 656! And in contrast to our decimal system, there's a very distinct word for each power, making it easier to distinguish between h[1000] and h[10 000] even under stress. This is very interesting. Maybe senary is really very helpful in letting people visualize large numbers, like arbiteroftruth mentioned?

The numeral systems that have actually shown up in the world appear to be {2, 3, 4, 6, 8, 10, 12, 15, 2:5, 4:5 or 20, 4:10, 6:10, 8:10, 12:10}, according to Hammarström's book (adding {2, 10, 12:10}, as he's only covering rarities). I am kind of cheating by including {4:10} and {12:10}, as those have both since turned into pure decimal. {5} has appeared, but only as an innovation by a few speakers of {4:5} languages. There have been isolated mistaken reports of {11} (thank goodness), and one of {30} (that would have been massively interesting if it were true). Discounting {2, 3, 4} out of hand as too small, and the mixed radices as unstable, we are left with {6, 8, 10, 12, 15(!)}. The total lack of factors of seven makes me suspect that even though religious forces could keep a seven-day week cycle in place, seven just wouldn't be accepted as a base of counting. Pentadecimal (as used in the Huli language) would seem to be ruled out as it is odd, so I predict that this system will eventually decay, though I don't know to what.

So maybe {(6), 8, 10, 12, (14), (16)} ought to be the range considered, with {(6), 8, 10, 12} being the stablest. (I do think {6} may well be too small, but I think it is good enough that the inertia against doubling would be greater for {6} than it is for {5}.)

My personal suspicion is that the vigesimal system would have eventually decayed to decimal, with vestigial groupings into twenties. So there would have been decades like "ten, one score, one score and ten, two score, two score and ten, three score, three score and ten, four score, four score and ten, a hundred", and then the decimal hundred would take over (no more score of scores). Even later (after more decimal pressure) much of this system may decay, perhaps with remnants persisting in the higher range (so French still goes vigesimal between 60 and 99), or in the urge to have single words for all numbers up to 20 (hence the -teens in English).

icarus
Dozens Demigod
Joined: Apr 11 2006, 12:29 PM

Oct 7 2015, 11:48 AM #7

This is a rational assessment, coupled with what we can know from history.

My approach had been to include the interposing odd/prime bases in the range because the magnitude of effort is similar to the evens'. Since I tried to include hexadecimal owing to the recent popularity of the base, pentadecimal gets thrown in.

The examination of properties that we've concluded in the Tour des Bases, especially the learning horizon, seems to whittle it down to tetradecimal at maximum.

{(6), 8, 10, 12, (14), (16)} thus seems rational in the light of what we've considered. In a way these are more like chemistry's "island of stability" concept for transuranic elements. {8, 10, 12} are seen as "stable," {6, 14, 16} less so ("radioactive"), {7, 9, 11, 13, 15} possible but much less so ("millisecond half life"), so as to be considered practically on par with bases outside of the range like 3 and 21 (theoretically highly unstable, like 3:1 ratio of protons to neutrons, sub-microsecond half life).

And of these, the optimum is dozenal, more utility for the investment of learning arithmetic. Look at it as a sort of ROI. Decimal is biologically inherent. Octal is an option that could very well arise.

It is interesting that quinary doesn't run to the square as the next rank, but doubles. This is a sign of a lack of confidence in grouping by 5. A grouping by 6 is extensible because that number in turn can be broken down, but 5 cannot. Six sixes is easier to comprehend because we can take steps to get there, whereas five fives needs to happen in a whole step.

In some ways we can regard senary as a sort of intermediate-step base compared to the latter two solid members of the "human scale". 6 - 36 - 216 - 1296 sits quite nicely within the gait of the two other bases.

One question to think about: could decimal be "chosen" (or arise) outside of a biologically inherent motivation, two hands of five fingers? If so, we should consider that as another reason for decimal as well. I can't think of anything; 2 and 5 do not appear naturally linked. It seems plain that many of the decimal cultures arrived there by doubling quinary rather than producing the range of ten straightforwardly.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 7 2015, 02:42 PM #8

icarus @ Oct 7 2015, 11:48 AM wrote: This is a rational assessment, coupled with what we can know from history.

My approach had been to include the interposing odd/prime bases in the range because the magnitude of effort is similar to the evens'. Since I tried to include hexadecimal owing to the recent popularity of the base, pentadecimal gets thrown in.
I doubt the magnitude of effort required by odd bases is really that similar to the even ones. I mean, any base below octal seems to be trivial to memorize; nonary is composite and so gets great help; but I can't see undecimal or tridecimal being as easy as even tetradecimal. I have tried writing out the tables for all the human-scale bases (up to 16) and {11, 13} ended up as a sea of corrections upon corrections. This may partially be due to lack of use, but even {15} wasn't this hard. Maybe {14, 15, 16} are approximately matched: in particular {14, 15} are both semiprimes and are of nearly the same magnitude, and they complement each other.

Nevertheless I have a hard time seeing {15} remain in force in society, with evenness being so important. Perhaps it would be easier for {9, 15} than the other odd prime bases, but I find myself very doubtful. (A pity, because I rather like how well {9, 15} do in spite of their oddness.)

The human-scale bases seem to be among the most interesting ones, not behaving like most large bases, as they are among the smallest twenty or so integers. They are approaching the edge at {2} and so have some exceptional properties, like extraordinarily few opaque digits (90% of decimal's digits are transparent; 86.667% - oh dear - for pentadecimal), and intuitive divisibility tests for almost all digits.
icarus @ Oct 7 2015, 11:48 AM wrote:The examination of properties that we've concluded in the Tour des Bases, especially the learning horizon, seems to whittle it down to tetradecimal at maximum.

{(6), 8, 10, 12, (14), (16)} thus seems rational in the light of what we've considered. In a way these are more like chemistry's "island of stability" concept for transuranic elements. {8, 10, 12} are seen as "stable," {6, 14, 16} less so ("radioactive"), {7, 9, 11, 13, 15} possible but much less so ("millisecond half life"), so as to be considered practically on par with bases outside of the range like 3 and 21 (theoretically highly unstable, like 3:1 ratio of protons to neutrons, sub-microsecond half life).
That's a nice metaphor, especially since odd atomic numbers are indeed so much less stable than even ones in the island of stability around the early actinides! Here's a list of longest-lived isotopes from atomic number 84 onwards, in case it inspires any more metaphors. (Bismuth, 83, is unstable, but with a half-life of quintillions of years I don't think it really matters.)

Polonium (84) - Po-209, 130 years
Astatine (85) - At-210, 8 hours
Radon (86) - Rn-222, 3.824 days
Francium (87) - Fr-223, 22.0 min
Radium (88) - Ra-226, 1600 years
Actinium (89) - Ac-227, 21.77 years
Thorium (90) - Th-232, 14.1 billion years
Protactinium (91) - Pa-231, 32800 years
Uranium (92) - U-238, 4.47 billion years
Neptunium (93) - Np-237, 2.14 million years
Plutonium (94) - Pu-244, 80 million years
Americium (95) - Am-243, 7400 years
Curium (96) - Cm-247, 16 million years
Berkelium (97) - Bk-247, 1000 years
Californium (98) - Cf-251, 900 years
Einsteinium (99) - Es-252, 470 days
Fermium (100) - Fm-257, 100.5 days
Mendelevium (101) - Md-258, 51.5 days
Nobelium (102) - No-259, 58 minutes
Lawrencium (103) - Lr-266, 11 hours
Rutherfordium (104) - Rf-267, 1.3 hours
Dubnium (105) - Db-268, 31 hours
Seaborgium (106) - Sg-269, 3.1 minutes
Bohrium (107) - Bh-270, 3.8 minutes
Hassium (108) - Hs-269, 27 seconds
Meitnerium (109) - Mt-278, 7.6 seconds
Darmstadtium (110) - Ds-281, 9.6 seconds
Roentgenium (111) - Rg-282, 2.1 minutes
Copernicium (112) - Cn-285, 29 seconds
Ununtrium (113) - Uut-286, 19.6 seconds
Flerovium (114) - Fl-289, 2.6 seconds
Ununpentium (115) - Uup-289, 220 milliseconds
Livermorium (116) - Lv-293, 53 milliseconds
Ununseptium (117) - Uus-294, 51 milliseconds
Ununoctium (118) - Uuo-294, 890 microseconds

(NB: Many of these near the end are the heaviest or second-heaviest known isotopes, which are still pretty neutron-deficient, so this will probably change in the near future.)

I would then say that {8, 10, 12} are like Th, U, and Pu (not in any particular order, but these are the only three of these elements that are long-lived enough to have survived since the formation of the Solar System), {6, 14, 16} are like Ra, Cm, and Cf (of which Ra is still around on Earth, thanks to its progenitors Th, U, and Pu; the latter two are not), {9, 15} are like Np and Bk (Bk is completely gone; Np almost is), and {5, 7, 11, 13} are like Ac, Pa, Am, Es (the first two are trace radioactives, while the latter two do not exist in nature anymore).
icarus @ Oct 7 2015, 11:48 AM wrote:And of these, the optimum is dozenal, more utility for the investment of learning arithmetic. Look at it as a sort of ROI. Decimal is biologically inherent. Octal is an option that could very well arise.
So that's only considering the "stable" candidates {8, 10, 12}, right? (Maybe we could call the next tier {6, 14, 16} "metastable"...)
icarus @ Oct 7 2015, 11:48 AM wrote: It is interesting that quinary doesn't run to the square as the next rank, but doubles. This is a sign of a lack of confidence in grouping by 5. A grouping by 6 is extensible because that number in turn can be broken down, but 5 cannot. Six sixes is easier to comprehend because we can take steps to get there, whereas five fives needs to happen in a whole step.

In some ways we can regard senary as a sort of intermediate-step base compared to the latter two solid members of the "human scale". 6 - 36 - 216 - 1296 sits quite nicely within the gait of the two other bases.
So I think I'll be using the range {(6), 8, 10, 12, (14), (16)} pretty consistently from now on. 6 has high divisibility that should help, 14 is a borderline case, and 16 is so often considered.

Thank you for providing an explanation for this tendency (that quinary would double instead of squaring, while senary could square)! For sure, it's much better than my incomplete one (that we would double quinary because evenness is so useful, but senary is already even). (^_-)-☆
icarus @ Oct 7 2015, 11:48 AM wrote: One question to think about: could decimal be "chosen" (or arise) outside of a biologically inherent motivation, two hands of five fingers? If so, we should consider that as another reason for decimal as well. I can't think of anything; 2 and 5 do not appear naturally linked. It seems plain that many of the decimal cultures arrived there by doubling quinary rather than producing the range of ten straightforwardly.
I dunno. If we were all hexadactyls, we'd probably have a few cultures using decimal just like we have a few cultures using octal in this universe (counting the gaps between the fingers). But that's still tied to biology.

Maybe 5 is small enough that it would serve as a weird but usable prime like 7, so 5 would have become a sacred number if it hadn't already been secularized and robbed of its mystery by appearing on everyone's fingers. Then we might get an alternative route to decimal, just like how you suggest 7 could have been possible. I'm not quite convinced by this because 7 and 14 do not appear to have become bases in any human society, but maybe the smaller magnitude of 5 and 10 may have pushed it over from "unattested" to "attested". Even so, this would still be creating 10 from 2 * 5.

If not, you mentioned that octal could possibly be a crowd-mentality choice (I wonder why though?). Maybe we would get that, in the absence of fingers as the deciding factor.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 7 2015, 03:19 PM #9

icarus @ Oct 7 2015, 11:48 AM wrote: A grouping by 6 is extensible because that number in turn can be broken down, but 5 cannot. Six sixes is easier to comprehend because we can take steps to get there, whereas five fives needs to happen in a whole step.
I have an example now! (^.^)/~~~

Ndom is a senary language, but sixes group into 18s (three sixes), and 18s then group into 36s (six sixes). This is in effect taking a step along the way at 6 * 3 (half a senary "hundred") to get from 6 to 6^2. A number like 100d is spoken "(36 times 2) and 18 and 6 and 4", making 244h (subdividing the sixes digit into 3 + 1).

P.S. if you go to that site's home page you'll find it ranks number systems by complexity. I'm not so sure about the ranking, though - I wouldn't claim Huli is the most complicated numeral system in the world. Huli may be pentadecimal (what a choice!), but it is consistently so, unlike French which has single words up to 16 (uh? trying for hexadecimal?), goes decimal until 69, then tries out sexagesimal for a decade before going to vigesimal at 80, until decimality is restored at 100.

Maybe the idea is to rank complexity in how the system relates to the way we write numbers, which is pretty universally decimal: so in that respect it is true that Huli's use of pentadecimal makes it hard to transcribe a number like 87, as you have to think to realize it's pentadecimal F[5C]. The use of vigesimal in languages like French then becomes the nicest possible non-decimal base, while Hindi's rather opaque compounding of decade and unit roots to make a centesimal system is easily decimal-compatible but still somewhat hard for an outsider to learn, so "complex" in a sense.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 9 2015, 07:02 AM #10

The Opacity Criterion

There is yet another simple criterion that we can use to determine the upper end of the human scale. We simply find the number of opaque totatives and see how many products are in their lines in the multiplication table. (This is perhaps being generous; there are finer shades, as the decimal 3 times table is harder to remember than the 9 times table, despite both being omega-related totatives.) The bases {2, 3, 4, 5, 6} have no opaque totatives. I will consider all bases up to {30}, and list the number of difficult multiplication facts they have. I got the list of opaque totatives from Icarus' digit maps, one of which I will quote here: http://z13.invisionfree.com/DozensOnline/index.php?showtopic=451&view=findpost&p=4133157

Binary: 0 facts
Ternary: 0 facts
Quaternary: 0 facts
Quinary: 0 facts
Senary: 0 facts
Septenary: {5}, 7 facts
Octal: {5}, 8 facts
Nonary: {7}, 9 facts
Decimal: {7}, 10 facts
Undecimal: {7, 8, 9}, 33 facts
Duodecimal: {5, 7}, 24 facts
Tridecimal: {5, 8, 9, a, b}, 65 facts
Tetradecimal: {9, b}, 28 facts
Pentadecimal: {b, d}, 30 facts
Hexadecimal: {7, 9, b, d}, 64 facts
Heptadecimal: {5, 7, a, b, c, d, e, f}, 136 facts
Octodecimal: {5, 7, b, d}, 72 facts
Enneadecimal: {7, 8, b, c, d, e, f, g, h}, 171 facts
Vigesimal: {9, b, d, h}, 80 facts
Unvigesimal: {8, d, g, h, j}, 105 facts
Duovigesimal: {5, 9, d, f, h, j}, 132 facts
Trivigesimal: {5, 7, 9, a, d, e, f, g, h, i, j, k, l}, 299 facts
Tetravigesimal: {7, b, d, h, j}, 120 facts
Pentavigesimal: {7, 9, b, e, g, h, i, j, l, m, n}, 275 facts
Hexavigesimal: {7, b, f, h, j, l, n}, 182 facts
Heptavigesimal: {5, 8, a, b, g, h, j, k, m, n, p}, 297 facts
Octovigesimal: {5, b, d, f, h, j, n, p}, 224 facts
Enneavigesimal: {8, 9, b, c, d, g, h, i, j, k, l, m, n, o, p, q, r}, 493 facts
Trigesimal: {7, b, d, h, j, n}, 180 facts

There are then various clusters apparent (rounding decimally):

Around 0: {2, 3, 4, 5, 6}
Around 10: {7, 8, 9, 10}
Around 30: {11, 12, 14, 15}
Around 70: {13, 16, 18, 20} [for comparison: the full duodecimal table has 78 unique facts, and the full decimal table has 100 non-unique facts]
Around 120: {17, 21, 22, 24}
...

I think this shows that any base besides {2-16, 18, 20} ends up having more difficult facts to memorize than the *whole* of the decimal table. The bases {13, 16, 18, 20} also go way too close to that figure for comfort, and they show up around twice as difficult as the previous cluster at {11, 12, 14, 15}. This seems to suggest that the upper bound is really 15, and that 13 is furthermore excluded. Including hexadecimal for the benefit of the doubt would also force the inclusion of the comparable {13, 18, 20}.
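(The list above can be regenerated mechanically. A sketch, on my reading of Icarus' digit maps: a totative is "opaque" when it divides neither the omega b-1 nor the alpha b+1, and each opaque totative contributes a full product line of b facts. The totatives print in decimal rather than the transdecimal letters used above.)

Code:

from math import gcd

def opaque_totatives(b):
    # totatives 1 < t < b dividing neither b-1 (omega) nor b+1 (alpha)
    return [t for t in range(2, b)
            if gcd(t, b) == 1 and (b - 1) % t and (b + 1) % t]

for b in range(2, 31):
    ts = opaque_totatives(b)
    print(b, ts, len(ts) * b, "facts")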

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 9 2015, 07:30 AM #11

The Opacity Criterion, Continued

However, we are perhaps oversimplifying the issue by looking at only the most difficult facts: the totative product lines.

Let's take duodecimal (base 12) and tetradecimal (base 14). It appears from the list above that they would be about matched in difficulty, as they have around the same number of difficult facts to memorize. But base 12 gains an advantage. Why is this so? It is because duodecimal has four factors {2, 3, 4, 6} with very easily constructed product lines. In tetradecimal there are only two, {2, 7}; the other duodecimal divisors turn into the more difficult semidivisor {4}, semitotative {6}, or alpha totative {3}. Furthermore, tetradecimal is larger, so its patterns are longer and more diluted than those of its semiprime cousin decimal.
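(A sketch of the digit classification I am using here, with the definitions as I read the forum's terms: a digit is a semidivisor if it divides some power of the base without dividing the base itself, and a semitotative if it shares a factor with the base without dividing any power of it. The sketch lumps alpha and omega totatives in with the rest.)

Code:

from math import gcd

def digit_class(d, b):
    if b % d == 0:
        return "divisor"
    if gcd(d, b) == 1:
        return "totative"
    m = d
    while gcd(m, b) > 1:   # strip out the primes d shares with b
        m //= gcd(m, b)
    # regular (divides some power of b) iff nothing else remains
    return "semidivisor" if m == 1 else "semitotative"

for b in (12, 14):
    print(b, {d: digit_class(d, b) for d in (3, 4, 6)})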

Therefore, we must not rely on any one figure alone, and must instead consider the base as a whole.

So far, I see no reason to amend my current stand that {14} is borderline territory, and that {16} would only be usable by treating it as binary compression. {14, 18, 20} are not so lucky to benefit from this, as they are not prime powers like {16} is.

Dilution

The factor of dilution is an important one. When you double a base, you double its digit span, so twice as many digits will need memorized tables. These tables will in turn be longer, and many of them will belong to opaque totatives, as the factors will usually miss some digits in the upper half of the base (the largest proper divisor can't be more than half the base). Here strongly composite numbers have an advantage, as they will have regulars filling up the gap, like duodecimal does with {8, 9} counterbalancing {7, a} (treating omega as trivial). But tetravigesimal has only {g, i} counterbalancing all the other semitotatives and totatives here, and even trigesimal has only {g, i, k, o, p, r} counterbalancing {h, j, l, m, n, q, s} - still not quite parity, and both sets are now too large. Similarly octal has only {5, 6} between half the base and omega, while hexadecimal chokes here on {9, a, b, c, d, e}, four of which are opaque.
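(A sketch checking these upper-half inventories: the regulars - digits whose every prime factor divides the base - lying strictly between half the base and omega. The empty lists for octal and hexadecimal show exactly the gap described above.)

Code:

from math import gcd

def is_regular(d, b):
    # d divides some power of b iff every prime factor of d divides b
    while (g := gcd(d, b)) > 1:
        d //= g
    return d == 1

for b in (8, 12, 16, 24, 30):
    print(b, [d for d in range(b // 2 + 1, b - 1) if is_regular(d, b)])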

We thus see that increasing the size of the base beyond a certain limit seriously harms it, as it admits more difficult digits.

Criteria for Choosing a Base for Human Society's General Usage

We can thus come out with some guidelines for choosing an efficient base for society as a whole, based on this. (I'm ignoring mixed radices, due to their instability, which I've mentioned many times.)

1. Always choose an even base, to keep the totient ratio no more than half, and to make sure the commonest fractions (halves and quarters) are resolved exactly.
2. Do not skip a prime factor unless you have a really, really good reason to do so (i.e. you have a relationship with it that is almost as good as a divisor relationship, which would be making it an omega-related Wieferich prime to your base). The only example of this I would currently think to be possibly really worth it would be decimal, although the case of tetradecimal getting transparency for the first four primes may be worth considering as well. Otherwise, you are diluting your base for no extra benefit.
3. When choosing to re-emphasize prime factors (e.g. the reinforcement of {2} in duodecimal from senary), always emphasize the lowest prime factor (which had better be 2, per guideline 1). We can see the increased dilution of octodecimal over duodecimal as an example.
4. Do not allow the size of the base to expand beyond {16}, for then you will be spending years just to learn your tables.
5. If you choose {16}, you must reinvent the numerals to show their binary origins (*), and use binary multiplication/division algorithms, treating {16} as a compression of binary and not a base in itself to use Stevin's algorithms on. Pure hexadecimal Stevin arithmetic will almost certainly be too inefficient.
6. Do not choose a base that has a power within the human scale, for then you will just end up bundling places together. This blocks out all bases below 5.
7. If a base has a multiple below the uncertain {16} (i.e. it's in the set {5, 6, 7}), and is not already very composite, re-emphasizing a prime factor may enter it into the sure human scale. See guideline 3 on how to do this.

These criteria eliminate all bases except {(6), 8, 10, 12, (14), (16)}. (6 if you think it is divisible enough for convenience in society despite its small scale; 14 depending on how you define "really, really good reason" in guideline 2; 16 because it's a prime power and so gets a concession, as we can use the simpler binary arithmetic for it instead of pure hexadecimal arithmetic.)

Once again, we have managed a two-tier classification with {8, 10, 12} in the first tier and {6, 14, 16} in the second, this time taking into account multiple factors. Their prime decompositions are:

6: {2, 3}
8: {2, 2, 2}
10: {2, 5}
12: {2, 2, 3}
14: {2, 7}
16: {2, 2, 2, 2}

I include {16} but not mixed radices like {6:10}, though both seem to involve a similar drop in efficiency (through forcing a rejection of Stevin's algorithms), because {16} is not as confusing: each hexadecimal digit means the same thing no matter where it is, while in 6:10 sexagesimal a "5" is halfway in a units place but almost exhausts a tens place.

It appears that {16} is on the borderline between wanting to bundle places ({2, 3, (4)} to {8, 9, (16)}), and wanting to split places ({36, 64} to {6, 8}). Hexadecimal gets on my list, instead of its square root quaternary, because of the previous work in this thread suggesting that quaternary's quadrotonous digits would prove a very serious problem, making all numbers above 256 have too many digits for convenience, many of which are repeated.

It goes without saying that these are all my opinions, after a theoretical look at bases that is assuredly far shorter than the ones some others on the forum have had.

(*) Perhaps one could get away with having binary-based hexadecimal numerals only for the youngest students, before graduating to learn a synonymous set that is not binary-based. But we would have to make sure the binary decomposition of each digit is securely remembered as though one's life depended on it, as otherwise we can't use hexadecimal.

icarus
Dozens Demigod
Joined: Apr 11 2006, 12:29 PM

Oct 9 2015, 03:27 PM #12

This is a very well defined and highly refined analysis that deserves, perhaps, publication in the Duodecimal Bulletin somehow.

I am not sure it is complete but I like where it's rollin'.

I say this because out of this range it seems pretty clear that 12 must be the optimum base of general computation. The go-to base if a civilization were able to "pick" one.

We'd published Chris Osburn's assessment and this outdoes it. (It would help to read his paper; I can't find the link right now but will post it soon.)

One of the things that bugs me is that mathematics is "not sufficient" to "prove" which base is best for human general use because of that sticky little word "human". Once people are involved, we have social and cognitive science - soft science - which requires scientific research on how people react, control groups, experiments with statistically significant populations - messy stuff. Having two kids and tutoring many middle school kids, I've come to see that some patterns I thought were easy to memorize are not. (I think this is largely because the way we teach kids to multiply has shifted. It does not involve as much rote memorization of the "lines" in the table, something I think is superior to how we try to teach today. Now they learn "postage stamps" in the table, i.e., 5 x 6, 6 x 5, 4 x 6, 6 x 4, something like that.) It seems that facts like 7 x 8 and 6 x 9 and their commutative reversals are the hardest for kids to know. They get 54 and 56 mixed up. I never had a problem with those two because I memorized the 9s and they were really easy to "re-concoct" while performing the operation: 9 x 6 = 54 because I take 6 - 1 and write 5, then the next number must be 9 - 5 = 4 (a sort of Wendy-esque production, perhaps, but it worked from second through eighth grade). The sevens were the worst to memorize. The thirds seemed easier because they were often needed. This said, my experiences are anecdotal and idiosyncratic; the kids I tutor often have AD/HD to some degree, and I have "Dabrowskian overexcitability" and trauma from psychological abuse in early childhood with follow-on peer abuse in middle and high school, and those issues caused me to "flee to and embrace" math and creativity - so not exactly the best "subjects" upon which to base any social study.

This sticky pickle, the cognitive-social science aspect, is the only reason why I won't ever say that I could prove that dozenal is the best base. I can't point to a study. The study would be very difficult indeed to perform. Perhaps it could be modeled in a computer simulation. How do you model teaching 6-8 year old kids dozenal arithmetic versus decimal?

The closest thing we can do is just exactly what Double Sharp is attempting here (and what Osburn did before that). Use number-theoretical concepts as guides, and the knowable acquisition and implementation obstacles we could foresee and say we can't know for sure, but there are these rough tiers that get set up with ranking we could project based on the NT concepts, and thereby we have this constellation.

The nitty-gritty of which is superior within these pairs - {12, 6}, {8, 10} - is awfully hard to know for sure. I have a feeling that 12 is better than 6: 6 can be acquired much more quickly, but there are non-mathematical implementation problems (human cognition) associated with 6 that 12 doesn't have. The space between 10 and 12 for me has drawn a little closer due to the omega issue with 10, but still it's pretty clear that 12 is far more flexible than 10, and I can still put distance between the two. 12 is justifiably superior to 10 and I think to 6, but {12, 6} is a cluster above ten. What I can't do is disqualify 6 for sure on word length/digit monotony. I think word length is a drag on 6, like a swimmer in a race whose jammers are stretchy versus another with new jammers. That drag is pesky and will slow the competitor despite strength. Senary is pretty close to not having that problem. Octal, I am sure, really doesn't have a word length/digit monotony problem, and quaternary quite clearly has it (this year is decimal 2015, but 133133_4, 13155_6, 3737_8). Once again we have that "subitization" limit of 6-7 creeping in, but working contrary to what you'd expect. It's really the closeness of the elements in the pairs that I can't solidify for sure without social science.

Maybe steering clear of an absolute declaration, and using the rough "learning horizon" and "word length/digit monotony", etc., as delimiters in lieu of exhaustive cognitive studies or simulations that will likely never be done professionally, is the way to go. (We could do a nifty and uncomplicated university-level study of word-length and digit-monotony; perhaps these have already been done and we can tap the results of those papers. Learning horizon vis-a-vis number bases and arithmetic seems much more complicated.)

You see why I never went down this road? I congratulate you on having done it nicely thus far.

I apologize if this pours a lot of murkiness in this clear glass of water. I guess these are things that I think about whenever approaching that list of good bases. It's like trying to divine which girls are my daughter's friends in middle school. Is it {Sacha, {Quinn, Shelby}} + {Jessica, Maddie, Abby, Bella} or {Sacha} + {Quinn-Shelby} (maybe even "Quelby") + {Jessica-Maddie} + {Abby, Bella}? I know who's involved but not exactly the precise pairing/grouping and their hierarchies with one another. But I know that Sacha is the closest out of all of them and maybe that's all that matters.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 9 2015, 03:39 PM #13

Getting Hexadecimal into the Human Scale


Looking at its behaviour (that of a diluted double of a human-scale base, having four opaque totatives), hexadecimal is really akin to octodecimal and vigesimal, which nearly everyone would reject as obviously too large for human arithmetic today, with a set of resistive products approaching the size of the entire decimal table.

Yet hexadecimal has one advantage over its less fortunate relatives octodecimal and vigesimal: it is a prime power, 16 = 2^4. Hexadecimal has an intimate relationship with binary (so does octal, but octal can stand on its own), and can use binary's small and trivial arithmetic tables instead of its own, by implementing multiplication as Russian multiplication (mediation/duplation). (That works in any base, but it works best in a base that is a power of two.)

The effect of this is that hexadecimal can enter the human scale with some modifications. Other modifications may help highly composite mid-scale bases: the most important ones seem to be {3:6, 4:5, 4:6, 5:6, 6:10, 12:10} = {18, 20, 24, 30, 60, 120}. However, these would seem to be unstable and prone to decomposing into pure-radix systems. With hexadecimal, every place is treated the same, so it may be less prone to decay.

What follows is a brief sketch of what would have to be done to make hexadecimal usable. I will assume digits that transparently encode their constituent bits, i.e. digit-thirteen ("d") would make it clear that it is 8 + 4 + 1. Nevertheless, I will use the standard alphanumeric digits here for convenience.


Addition and subtraction
There is a cute way to add in hexadecimal with transparent binary digits. I'll assume Bruce Martin's numerals, as they are conceptually the simplest. Like Icarus, I would make them clearer by drawing the 2's stroke through the figure and not any of the others.

Terminology: the stem of the digit is the vertical stroke in it. A slot is a position where a horizontal stroke can reach the stem. There is thus an 8's slot, a 4's slot, a 2's slot, and a 1's slot on each digit (depending on which stroke goes there). A slot is essentially a bit, or binary digit.

To add, we can simply superimpose the digits onto each other! The only rule is then that when two strokes coincide, we delete them both and add a stroke to the immediately next higher slot within the digit (and repeat this if necessary). If the top strokes coincide, we continue to the bottom slot of the next digit to the left.

This is basically an implementation of binary addition.
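(The stroke-merging rule is exactly the XOR/carry decomposition of binary addition; a sketch:)

Code:

def add_strokes(a, b):
    # strokes that don't coincide stay put (XOR); coinciding strokes
    # cancel and promote one stroke to the next higher slot (carry)
    while b:
        a, b = a ^ b, (a & b) << 1
    return a

print(format(add_strokes(0x9c, 0x37), "x"))  # d3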

For subtraction, the algorithm would be to superimpose the digits once again, but cancel out coinciding strokes. We would need to make it clear which digits are from the top number (the minuend) and which are from the bottom (the subtrahend). If a stroke appears in the minuend but not the subtrahend, it is left untouched. But if a stroke appears in the subtrahend but not the minuend, we remove the stroke immediately above (if it's the top one, go to the bottom slot of the previous digit), repeating if necessary, and then give a duplicate stroke to the needed place in the minuend. We can then remove one of them.

This is essentially binary subtraction. Likely after a while we'd remember common results like 2 - 1 = 1, and not have to carry binarily to figure that out. Perhaps pedagogically we wouldn't use duplicate strokes (too hard to distinguish from two strokes on adjacent slots), but would use another symbol, like a ring around (or through in the case of the 2 slot) the right slot on the stem of the number.

A downside is that we will be doing more operations, but each one is trivial, and due to the spatial arrangement of digits, we will be essentially bundling four-place binary figures in comparing the digits of both numbers we are adding or subtracting.

Multiplication and division
The idea behind hexadecimal multiplication is to convert it to binary.

In binary, the only possible numbers to multiply by are 1 and 0: both are trivial. But these ones and zeroes can appear in any slot on a four-slot hexadecimal digit: the 8, the 4, the 2, or the 1 slot. We would have to remember how a single digit behaves when it is multiplied by 1 (trivial), 2, 4, or 8. To that end we would have what Icarus called a "roll table" which shows what happens when you repeatedly double any odd digit {1, 3, 5, 7, 9, b, d, f}.

1 → 2 → 4 → 8 → 10
3 → 6 → c → 18 → 30
5 → a → 14 → 28 → 50
7 → e → 1c → 38 → 70
9 → 12 → 24 → 48 → 90
b → 16 → 2c → 58 → b0
d → 1a → 34 → 68 → d0
f → 1e → 3c → 78 → f0
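(The whole roll table is just repeated doubling, so it can be regenerated at will; a sketch:)

Code:

for d in range(1, 16, 2):   # the odd digits 1, 3, 5, ..., f
    print(" -> ".join(format(d << k, "x") for k in range(5)))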

We could then break up a problem easily, e.g. f6 * 4e = 4af4:

Code:

  1     f6
  2    1ec
  4    3d8
  8    7b0
10    f60
20   1ec0
40   3d80
------------
4e   4af4
(When the roll progresses to multiple digits, we have to carry it into the digit to the left. We only need to note the double, quadruple, and octuple of one of the factors, as the rest will be the same with padding zeroes. Then we sum the ones we need, remembering the binary decomposition of each digit - but with bitwise numerals we wouldn't need to.)
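(A sketch of the whole procedure: mediation/duplation driven by the binary decomposition of one factor, exactly as in the table above.)

Code:

def mediation_duplation(x, y):
    total = 0
    while y:
        if y & 1:     # this slot of y has a stroke: add the current roll of x
            total += x
        x <<= 1       # duplation: double one factor
        y >>= 1       # mediation: halve the other
    return total

print(format(mediation_duplation(0xf6, 0x4e), "x"))  # 4af4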

For division, we have to somehow implement binary division. Let's use 21 / 3 (decimal 33/3) as an example of how this would end up being for even a small problem within the standard multiplication table.

We'd see that "2" is smaller than "3", move forward a slot, and now consider a "phantom numeral" that had the strokes of the 2 and the top stroke of the 1. This would be a binary "100", and the last bit in it would be the 8's stroke of the units digit. Now we see that 3 is smaller than binary "100", and so we put a stroke in the last slot of this "100" - the 8's stroke of the units digit.

Now we know 3 * 8 = 18 from the rolling table (if we had a multi-digit quotient, we'd have to carry). So we subtract that from 21, and end up with 9. We may now recognise 3 * 3 = 9 through diffusion (it comes up often). If not, we will note that it is between 3 * 2 = 6 and 3 * 4 = c, and so we can put a stroke in the 2's slot of the units digit. Subtract 3 * 2 = 6, getting 9 - 6 = 3, which is obviously 3 * 1; so finish off by putting a stroke in the 1's slot of the units digit.

The units digit now has an 8's stroke, a 2's stroke, and a 1's stroke; so the answer is 8 + 2 + 1 = b. To verify, use the roll table to find b * 3 = (b * 2) + b = 16 + b = 21.
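
The whole procedure is binary restoring division; a minimal Python sketch (illustrative only):

Code: Select all

def roll_divide(n, d):
    # binary long division: bring down one stroke (bit) at a time
    q, r = 0, 0
    for bit in reversed(range(n.bit_length())):
        r = (r << 1) | ((n >> bit) & 1)  # bring down the next stroke
        if r >= d:
            r -= d
            q |= 1 << bit                # put a stroke in this slot
    return q, r

print(roll_divide(0x21, 0x3))  # (11, 0): quotient b, remainder 0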

One drawback is that for division (and not so much for the other operations), it is sometimes necessary to think purely binarily, without forcing ourselves into the four-bit hexadecimal bundling. We saw an example in the walkthrough above, when we combined strokes from different hexadecimal digits: that is where we cross the gap between digits in the binary division below. This may be confusing. We would perhaps need to teach the base as binary, and keep emphasizing that the societal use of hexadecimal is simply a shorthand for binary. The binary mindset would then take hold.

I do hope that was clear enough to actually get my point across.

The binary equivalent is


Code: Select all

        _____1011_
     11 ) 10 0001
           1 1
           -----
             100
              11
             -----
               11
               11
               ---
                0
(I quoted this example from Dr. Math, and gave an explanation of how it would work in hexadecimal.)

[x]

Perhaps, if these methods are easy enough, we may be able to "wean" people off the bitwise digits after a few years and use less transparent but Hindu-Arabic-compatible digits, provided people remember the hexadecimal-binary decompositions and recompositions of each digit.

The end result would be hexadecimal implemented with binary arithmetic, a cunning way to work around the inefficiency inherent in strictly hexadecimal arithmetic.

(I may later do a follow-up on extracting square roots in hexadecimal through binary.)

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 9 2015, 03:51 PM #14

icarus @ Oct 9 2015, 03:27 PM wrote: This is a very well defined and highly refined analysis that deserves, perhaps, publication in the Duodecimal Bulletin somehow.

I am not sure it is complete but I like where it's rollin'.

I say this because out of this range it seems pretty clear that 12 must be the optimum base of general computation. The go-to base if a civilization were able to "pick" one.

We'd published Chris Osburn's assessment, and this outdoes it. (It would be worth reading his paper; I can't find the link right now but will post it soon.)

One of the things that bugs me is that mathematics is "not sufficient" to "prove" which base is best for human general use, because of that sticky little word "human". Once people are involved, we have social and cognitive science - soft science - which requires scientific research on how people react, control groups, experiments with statistically significant populations - messy stuff.

Having two kids and tutoring many middle school kids, I've come to see that some patterns I thought were easy to memorize are not (I think largely because the way we teach kids to multiply has shifted: it no longer involves as much rote memorization of the "lines" in the table, something I think is superior to how we try to teach today. Now they learn "postage stamps" in the table, i.e., 5 x 6, 6 x 5, 4 x 6, 6 x 4, something like that). It seems that facts like 7 x 8 and 6 x 9 and their commutative reversals are the hardest for kids to know. They get 54 and 56 mixed up. I never had a problem with those two because I memorized the 9s and they were really easy to "re-concoct" while performing the operation: 9 x 6 = 54 because I take 6 - 1 and write 5, then the next number must be 9 - 5 = 4 (a sort of Wendy-esque production, perhaps, but it worked from second through eighth grade). The sevens were the worst to memorize. The thirds seemed easier because they were often needed.

This said, my experiences are anecdotal and idiosyncratic; the kids I tutor often have AD/HD to some degree, and I have "Dabrowskian overexcitability" and trauma from psychological abuse in early childhood with follow-on peer abuse in middle and high school, and those issues caused me to "flee to and embrace" math and creativity - so not exactly the best "subjects" upon which to base any social study.

This sticky pickle, the cognitive-social science aspect, is the only reason why I won't ever say that I could prove that dozenal is the best base. I can't point to a study. The study would be very difficult indeed to perform. Perhaps it could be modeled in a computer simulation. How do you model teaching 6-8 year old kids dozenal arithmetic versus decimal?

The closest thing we can do is exactly what Double Sharp is attempting here (and what Osburn did before that): use number-theoretical concepts as guides, along with the knowable acquisition and implementation obstacles we can foresee; admit we can't know for sure, but set up rough tiers with a ranking projected from the NT concepts - and thereby we have this constellation.

The nitty-gritty of which is superior within the pairs {12, 6} and {8, 10} is awfully hard to know for sure. I have a feeling that 12 is better than 6: 6 can be acquired much more quickly, but there are non-mathematical implementation problems (human cognition) associated with 6 that 12 doesn't have. The space between 10 and 12 has for me drawn a little closer due to the omega issue with 10, but it's still pretty clear that 12 is far more flexible than 10, and I can still put distance between the two. 12 is justifiably superior to 10 and I think to 6, but {12, 6} is a cluster above ten. What I can't do is disqualify 6 for sure on word length/digit monotony. I think word length is a drag on 6, like a swimmer in a race whose jammers are stretchy versus another with new jammers: that drag is pesky and will slow the competitor despite strength. Senary is pretty close to not having that problem. Octal, I am sure, really doesn't have a word-length/digit-monotony problem, and quaternary quite clearly has it (this year is decimal 2015, but 133133_4, 13155_6, 3737_8). Once again we have that "subitization" limit of 6-7 creeping in, but working contrary to what you'd expect. It's really the closeness of the elements in the pairs that I can't solidify for sure without social science.

Maybe steering clear of an absolute declaration, and using the rough "learning horizon" and "word length/digit monotony", etc., as delimiters in lieu of exhaustive cognitive studies or simulations that will likely never be done professionally, is the way to go. (We could do a nifty and uncomplicated university-level study of word length and digit monotony; perhaps these have already been done and we can tap the results of those papers. The learning horizon vis-à-vis number bases and arithmetic seems much more complicated.)

You see why I never went down this road? I congratulate you on having done it nicely thus far.

I apologize if this pours a lot of murkiness into this clear glass of water. I guess these are things that I think about whenever approaching that list of good bases. It's like trying to divine which girls are my daughter's friends in middle school. Is it {Sacha, {Quinn, Shelby}} + {Jessica, Maddie, Abby, Bella} or {Sacha} + {Quinn-Shelby} (maybe even "Quelby") + {Jessica-Maddie} + {Abby, Bella}? I know who's involved, but not exactly the precise pairing/grouping and their hierarchies with one another. But I know that Sacha is the closest out of all of them, and maybe that's all that matters.
I deserve a bulletin spot? (@_@;) Wow. That's really awesome to hear, as I have spent nowhere near as much time on this topic as you have.

I wrote an additional post above explaining how to cheat {16} into the human scale, despite it otherwise being more on a par with {18, 20}, thanks to {16}'s nice prime decomposition of 2^4. So now I would say the only things that need to be considered are {(6), 8, 10, 12, (14), (16)}, as I've convinced myself that {16} cannot be dismissed out of hand by instability-related or time-related arguments.

Yes, please: I'd really like the link to Osburn's paper! It would really help to see another's ideas on the topic.

Kids learn "postage stamps" now? I didn't know about this! It would probably have slowed me down, as I learnt from the full table and memorized patterns (like the psi pattern in 8, the omega pattern in 9, and rhythms like 3-6-9-2-5-8-1-4-7 and its reverse just like on a phone dial). It's interesting that you also mention that the patterns in the full table helped you. So maybe "postage stamps" would be a leveller of number bases, truly making each fact equal by isolating it in a vacuum.

I am not an expert in child education, nor do I play one on TV. It would be hard to perform a scientific and ethical (important!) study. In real life, how would the dozenally or octally trained kids adapt to our decimal civilisation? (Though since you have some experience, I feel like I have to ask how important you think the fingers' resonance with decimal is in early arithmetic acquisition.)

I plan to do another series of detailed posts comparing the remaining contenders {(6), 8, 10, 12, (14), (16)}, and perhaps getting closer and closer to a conclusion. I have to think about this some more, particularly on how to compare 16's algorithms with the others'.

No need to apologize for adding murkiness; this is a murky subject anyway! But I don't want to commit myself absolutely to getting behind a particular base until I have really convinced myself it is the best. Right now I think {12} is almost certainly the best, but I'm not quite satisfied with the level of rigorous examination I've subjected it to yet.

Kodegadulo
Obsessive poster
Joined: Sep 10 2011, 11:27 PM

Oct 9 2015, 05:11 PM #15

Double sharp @ Oct 9 2015, 03:51 PM wrote:
icarus @ Oct 9 2015, 03:27 PM wrote: This is a very well defined and highly refined analysis that deserves, perhaps, publication in the Duodecimal Bulletin somehow.

I am not sure it is complete but I like where it's rollin'.
...
I deserve a bulletin spot? (@_@;) Wow. That's really awesome to hear, as I have spent nowhere near as much time on this topic as you have.
This does sound like it would make a good article. The Bulletin always has room for a number-theoretical paper. I suggest the two of you collaborate on a joint article summarizing these findings. It doesn't matter if your conclusions are incomplete or tentative at this point. You could certainly include a link to this thread "for ongoing discussion". Maybe even make it a call for action/research among the membership; someone out there with more of a professional background in cognitive science or childhood acquisition of numeracy might chime in.
As of 1202/03/01[z]=2018/03/01[d] I use:
ten,eleven = ↊↋, ᘔƐ, ӾƐ, XE or AB.
Base-neutral base annotations
Systematic Dozenal Nomenclature
Primel Metrology
Western encoding (not by choice)
Greasemonkey + Mathjax + PrimelDozenator
(Links to these and other useful topics are in my index post;
click on my user name and go to my "Website" link)

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 10 2015, 06:32 AM #16

I found Osburn's article! (http://www.dozenal.org/articles/db4a117.pdf) (^_^) I don't quite agree with its inclusion of square end-digits, because I don't see how this is really that important. Primality testing seems more important, as it relates to the number of totatives. Nevertheless, I think it has a more serious problem: while it claims to "quantify how a human will feel about each number base while counting and doing arithmetic", it does not factor scale into consideration. This is mostly because he counts regular digits, totatives, and square end-digits as percentages relative to the size of the base, neglecting that totative resistance, as a difficulty, does not work that way. He fails to consider that while vigesimal has the same totient ratio as decimal, it has twice as many single-digit totatives resisting arithmetic acquisition. This is what I called "dilution" above. It hurts not to memorize the h times table when working in vigesimal, whereas in decimal it doesn't matter if you don't know the 17 times table, as that can simply be a compound operation. It is not the ratio that matters, it is the absolute quantity, which I tried to quantify above. He also credits omega factors, but not alpha factors. It is true that alpha is a little more difficult to use, but surely it isn't worth nothing.

[d]

After that hexadecimal diversion (we may have another later), it's back to comparing the human-scale bases, which we have sort-of-proven to be {(6), 8, 10, 12, (14), (16)}. What I am trying to do is to find a justifiable and well-argued case for their ordering in usefulness. Let's look at their prime factorizations again:

6 = 2 * 3
8 = 2 * 2 * 2
10 = 2 * 5 (gapped)
12 = 2 * 2 * 3
14 = 2 * 7 (gapped)
16 = 2 * 2 * 2 * 2

The two semiprimes marked as gapped, {10, 14}, have a gap in their prime factorization. This means they are tied down by weights to some extent. A larger prime will divide fewer integers (starting from zero and capping at some limit) than a smaller prime, and hence the base's totient ratio will be higher than it would have been had it incorporated a smaller prime. This also dilutes the multiplication table and makes it harder to memorize. The only plus is greater concision, but by that argument pure-base-2520 is far better than any of the human-scale bases, which is a patently absurd assessment if we are comparing {(6), 8, 10, 12, (14), (16), 2520} for use in society. Therefore, I do not think we can use concision as a main criterion, although we can certainly use it when the more important ones are inconclusive.

However, a base with a gapped prime factorisation does prioritise extrinsic properties over intrinsic properties. If we try to extend duodecimal's resolution to the prime 5 with an auxiliary base, we get the problem of emphasizing fifths over the other fractions: once we take off the trailing zeroes from a number's duodecimal representation, we've removed most of the factors of 2 and 3 (the more important ones). Hence the vicious cycle of duodecimal auxiliaries:

[z]
10 doesn't handle 5. Quintuple it:
50 emphasizes fifths too much, over halves. Double it:
a0 has better halves, but thirds and quarters are worse than fifths. Triple it:
260 has better halves and thirds, but quarters are still worse than fifths. Double it:
500 emphasizes fifths too much, over halves... and the cycle starts again with an extra zero, like a mockery of hexadecimal rolling.
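
To see the cycle numerically, here is a minimal Python sketch (illustrative only): strip the trailing dozenal zeroes from each candidate auxiliary and look at the core that is left over, which is what colours its fractions.

Code: Select all

def strip_dozenal_zeroes(n):
    # remove trailing duodecimal zeroes (factors of 12)
    while n % 12 == 0:
        n //= 12
    return n

# decimal 12, 60, 120, 360, 720 = duodecimal 10, 50, a0, 260, 500
for aux in (12, 60, 120, 360, 720):
    print(aux, strip_dozenal_zeroes(aux))
# cores: 1, 5, 10, 30, 5 - by 720 (z[500]) we are back to a bare 5,
# one place later, and the cycle restarts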
[d]

In this respect decimal and tetradecimal are superior, as the factors of 5 and 7 that would be present in every auxiliary base for them would be taken out by the trailing zero. Hence an auxiliary like decimal 60 works very well, much better than duodecimal z[50], as its fractions behave like senary fractions and not quinary fractions. A deficient base makes for a better auxiliary, while in duodecimal we would be confined to divisions like z[100] and approximating fifths with the square-alpha z[101]. Nevertheless, auxiliary bases have a problem in that they introduce a bit of radix-mixing into the original base. Choosing an exact power of the base (e.g. "100" or "1000") makes for easy conversion between the fractions of the auxiliary and digital fractions expressed behind the radix point. And this argument for decimal and tetradecimal feels like ruining the base's own composition for external benefits that ameliorate it. In the same way I could argue for undecimal, which has amazing extrinsic relationships (10 and 12 are its neighbours, so the square-omega is the long hundred, 120, showering tons of undecimal numbers with extrinsic benefits), even though those don't actually help you calculate with undecimal (as ease of calculation is based on intrinsic properties)!

Decimal is not an ordinary case of a base with a gapped prime factorisation, however. It is the only even one small enough to be certainly within the human scale (citation not needed), and it even goes out of its way to mitigate things, as 3 is a decimal omega totative and a decimal Wieferich prime. This means that the recurrence is single-digit (easy to recognise and use), and we cover multiple powers of 3 just as we would expect in a base that actually has 3 in its factorisation (and hence 9 as a regular digit)! Hence the gap between {10} and {12} is very narrow, as {10} behaves like a pseudo-5-smooth base. This post will thus instead concern itself with bashing my favourite punching bag among number bases: tetradecimal.

Tetradecimal

Tetradecimal's prime factor of 7 does mean that auxiliary bases are forced to incorporate 7, which is perhaps a bonus. Now, while the relative usefulness of 7 is questionable (I think it is borderline: see this post for some justification: http://z13.invisionfree.com/DozensOnline/index.php?showtopic=1361&view=findpost&p=22174538; that post also suggests from the SHCN sequence that 3^2 and 2^3 are important prime powers), it remains that 7 is well outside the subitizing range of most people. This has implications for tetradecimal metrologies, as I've posted (http://z13.invisionfree.com/DozensOnline/index.php?showtopic=509&view=findpost&p=22174725):

Double sharp @ Oct 8 2015, 01:05 PM wrote: There is yet another difference between decimal (10 = 2 * 5) and tetradecimal (14 = 2 * 7) that I realized while reading this post (http://z13.invisionfree.com/DozensOnline/index.php?showtopic=1002&view=findpost&p=22098145). With decimal, (metric) rulers look like this:

[image: decimal ruler with quinary subdivisions]

This is perfectly fine, as quinary subdivisions mean that there are four small marks between the bigger marks on the ruler, and 4 is unquestionably within the subitizing range. But with tetradecimal, they look like this:

[image: tetradecimal ruler with septenary subdivisions]

and now there are six small marks between the bigger marks (as the subdivisions are now septenary), and 6 is outside the subitizing range.
This seems to suggest that any base that has 7 or any larger prime present in its prime factorization may be unusable by human society, as it would be that much less efficient to measure things in that base. With {10} it is quite possible and easy; with {14}, things look a lot more challenging to do at a glance.

The only benefit tetradecimal's widely separated prime factors seem to give is its adjacency to the first odd squarefree composite, 15 = 3 * 5. This means that we have the alpha test available for the primes 3 and 5 (but not the prime power 9 = 3^2). Yet we are forgetting that the alpha test is more difficult than the omega test (you've got to get the alternating places just right, or sum two-digit numbers instead of one-digit numbers). Furthermore, 3 is important enough that its powers like 9 gain in importance too; so if we skip 3 and leave 9 without a test, we no longer even manage to fake the all-important 3-smoothness - 3 is, after all, the second most fundamental prime, and we cannot avoid it even in decimal. And note how, when 3 and 9 are treated equally (as in decimal), we do not favour one excessively over the other when handling numbers and groupings (not divisions)! We do not go out of our way to keep powers of three from accumulating, as they naturally would! In tetradecimal we would probably end up avoiding 9 even more than the already badly treated 3. The decimal set-up of {2, (3^2), 5} is probably superior to tetradecimal's {2, (3), (5), 7}, where parentheses denote indirect tests.

And these are only the disadvantages specific to tetradecimal. The dilution issue (the multiplication table's increased difficulty and sparser regular numbers) is general, but applies just as well to {14} over {10} too.

If we want auxiliary bases that incorporate 7, we may already be inflating the importance of 7 from "unimportant" to "important", given how little we use sevenths in our decimal civilization (where they look unimportant and are unimportant). If we look at arbiteroftruth's post here (http://z13.invisionfree.com/DozensOnline/index.php?showtopic=1361&view=findpost&p=22174514), we see that trying to have the primes {2, 3, 5, 7} covered in a proportionate ratio (making larger numbers have longer reciprocals) means that 2520 is the lowest candidate, which is far beyond the human scale (although it could be coded 12:15:14, this would be very unstable). So if we really need sevenths, our niche application is already inflating the importance of sevenths beyond its natural value. And what's wrong with decimal 840 as a seventh-including auxiliary base? It treats halves, thirds, quarters, sixths, and twelfths with sexagesimal cleanliness. It also treats sevenths (120) better than fifths (168), as befits a seventh-needing application that itself inflates sevenths in importance. Decimal works well enough here.

In the light of all this, I cannot see any reason why anyone would want to choose tetradecimal. Its gapped prime factorization, its inclusion of 7, and the lack of intensive effort put in to mitigate its gap (as decimal does to ameliorate the status of 3) mean that it is simply diluted for no good reason. Even hexadecimal is not as bad, as despite its greater size there is a way to abbreviate it and make it more efficient (as 16 = 2^4).
With tetradecimal, we're stuck learning those tables, which are longer and have worse patterns than in duodecimal (it looks like decimal in some ways, as many duodecimal divisors are mere semidivisors or semitotatives in it, but it is significantly harder due to its larger size and its alpha dominance instead of decimal's omega dominance). Tetradecimal is in all probability the worst base to choose in the human scale {(6), 8, 10, 12, (14), (16)}, and it was doubtful to begin with. In our ranking, we can therefore safely put it at the bottom.

1.
2.
3.
4.
5.
6. Tetradecimal (14)

LumenF
Casual Member
Joined: Sep 13 2014, 01:30 AM

Oct 10 2015, 06:53 AM #17

There are a couple of questions on my mind when I consider what base is optimal for human society. One is, "what is the ideal subunit?" Wendy mentions that the visual limit on an abacus is around five; looking at subitization studies and watching how people count unordered sets (by 1s, 2s, and 3s, in my experience), I would have estimated it more at four. As much as I liked senary, upon first considering this question I found its less-than-ideal handling of 4 to be surprisingly awkward. It makes me wonder if 6, as a subunit, is just slightly too big. Probably a good question to test in cognition studies, as senary does seem to otherwise make a good system of measures.

Relatedly, "how much do people make use of the omega and alpha really?" Personally, I learned to make heavy use of decimal's omega even in direct calculations, but I find most people either attempt to calculate more precisely (in which case these aren't quite as helpful) or estimate less precisely (in which case they usually ignore this aspect entirely). That said, this aspect may become much more salient in a base like hexadecimal (or tetradecimal), where one can estimate 3rds and 5ths by the same trick. This would be even more difficult to test, as this concerns habitual use, but would clear up a great deal on this subject.

Of course, a lot of these problems can be benched if one simply limits the question to the ideal base for a subset of the population. Even if one has conclusive evidence that a given base is more efficient than decimal, how that will change the international use of decimal is another question altogether. On the other hand, if a segment of a population already makes regular use of, say, dozens, it would provide a context for doing these much needed empirical studies.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 10 2015, 07:07 AM #18

I think senary would tend to break up as a subunit, precisely because it's very highly divisible for its size. I think 12 does beat it because 4 is important, but then that seems to show that senary is too small. Quinary and septenary could not break up, as they are primes, but would probably not be used repeatedly (instead doubling to reach the next grouping), which is, as Icarus said, a sign of a lack of confidence in grouping by awkward primes.

I learned and used the decimal omega test, but not the decimal alpha test. But I think that while they are nice sanity checks for calculations, that is not their main point. The main point is instead that the decimal omega, 9, fills in resolution for 3. Hence omega-related fractions like 0.333... and 0.166... are pretty common and easier to handle or recognise than even alpha-related fractions like 0.090909... or unrelated fractions like 0.142857142857.... This gives decimal pseudo-5-smoothness, as the omega fills in 3 to the second power. I see 33% and 16% discounts pretty often. Omega is not just a divisibility or sanity test: it improves the treatment of 3 generally, from "foreign intruder digital spy" into "VIP dignitary from the House of Threesmooth". You don't see alpha that much in decimal because 11 is practically useless. In tetradecimal and octal we would probably see it.

If we look only at niches, then the answer is simple. What do you need in your application? Use that to evaluate your base. If you are counting weeks, you need divisibility by 7. Evenness is an almost universal need, so the obvious choice is tetradecimal. If you need extremely high divisibility, but don't need to compute with the base, choose a super auxiliary like 360 or 2520. If you want binary compatibility that you can use, choose a power of two like 8 or 16.

But it is difficult to convert between bases on the fly, so it is probably easier to choose something that can handle most things that you throw at it. We want a flexible and efficient base that can be learned by most of society, and I think {12} would be the one of choice.

I doubt we will ever replace decimal. I think it's really too late for that: it's too entrenched in society at this point. The choice was made long ago, and we have to stick to it now (outside a few niches).

But I want to know how good that choice was.

(EDIT: Post number long hundred! It looks like I'll be noting SHCN milestones as well as the decimal and duodecimal milestones.)

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 10 2015, 08:39 AM #19

Double sharp @ Oct 9 2015, 03:51 PM wrote: ...rhythms like 3-6-9-2-5-8-1-4-7 and its reverse just like on a phone dial)...
Incidentally, this memorization strategy works only if one of the neighbours of the base is a prime power (i.e. you have an omega- or alpha-related Wieferich prime to your base). So it works in decimal, where omega is 9 = 3^2. In hexadecimal, where omega is 15 = 3 * 5, you can lay out the numbers so that either factor is easy to memorize, but not both. It would also work in tetravigesimal, where alpha is 25 = 5^2, but that's too big. The human scale bases that benefit from this are octal and decimal, discarding odd bases out of hand. In both octal and decimal, it applies to 3 and its complement: this is very good as the complement of 3 (5 in octal; 7 in decimal) is the only completely opaque digit in the base.

If the relationship is omega, you take out zero and put it below the other digits. If the relationship is alpha, you'll have one space left at the end, which may perhaps be used for the radix point.

Decimal example:

Code: Select all

 7 8 9
 4 5 6
 1 2 3
 0   .
Octal example:

Code: Select all

 6 7 .
 3 4 5
 0 1 2

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 10 2015, 09:37 AM #20

I guess we're not going to have a post on hexadecimal root-extraction after all, as arbiteroftruth has already explained how to do that (and truly, hexadecimal's composition does make it easier than in any other base).

[d]

Time for another post on the human-scale bases! We've already disposed of {14}, so our next victim base to be examined will be the borderline {16}.

Hexadecimal

As stated above, we can't really use hexadecimal with hexadecimal arithmetic à la Stevin. However, since 16 = 2^4, we can use binary arithmetic! This is essentially treating {16} as simply a proxy for binary, or a quadruple-subdigit base {2:2:2:2}. And in contrast to other mixed radices like {6:10}, every digit is treated the same way in hexadecimal. So confusion will not be coming from that angle.

No, the trouble with hexadecimal is that using binary arithmetic algorithms may be slower (as each operation covers much less of the number), and may be confusing (because the steps needed may not always fold nicely into groups of four digits).

How much slower? Well, addition and subtraction are abbreviated almost down to pure-hexadecimal efficiency, because they can be reduced to simply overlaying the numbers to be operated on onto each other (although we'd need to memorize interesting carry rules). It is multiplication that sorely taxes hexadecimal, as we get this table:

[x]

* 0: 1 step
* 1: 1 step
* 2: 1 step
* 3: 2 steps (2+1)
* 4: 1 step
* 5: 2 steps (4+1)
* 6: 2 steps (4+2)
* 7: 3 steps (4+2+1), or 2 steps allowing subtraction (8-1)
* 8: 1 step
* 9: 2 steps (8+1)
* a: 2 steps (8+2)
* b: 3 steps (8+2+1)
* c: 2 steps (8+4)
* d: 3 steps (8+4+1)
* e: 3 steps (8+4+2), or 2 steps allowing subtraction (10-2)
* f: 4 steps (8+4+2+1), or 2 steps allowing subtraction (10-1)

[d]
Even assuming the lowest possible values (using subtraction for 7, e, and f), the average number of steps needed to multiply by a single digit, taken over the 16 hexadecimal digits, is 1.8125, or close to 2. This value is 1 for all other bases, implying that multiplication is only about 55.2% as efficient in hexadecimal (we would get 1 if we could use pure hexadecimal arithmetic, but we can't). And even if we cap every base's multiplier range at 15, we get 1.0625 for tetradecimal, 1.1875 for duodecimal, 1.3125 for decimal, 1.4375 for octal, and 1.5625 for senary. Hexadecimal is then still significantly less efficient than the other bases here! Splitting the operation binarily may turn hexadecimal from unusable to usable, but it makes the operation take twice as long, with many more intermediate steps!
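
That average is quick to check; a minimal Python sketch (illustrative only) that hard-codes the step counts from the list above and takes the mean:

Code: Select all

# steps to multiply by each hexadecimal digit via the roll table,
# allowing a subtraction where it is shorter (7 = 8-1, e = 10-2, f = 10-1)
steps = {0x0: 1, 0x1: 1, 0x2: 1, 0x3: 2, 0x4: 1, 0x5: 2, 0x6: 2, 0x7: 2,
         0x8: 1, 0x9: 2, 0xa: 2, 0xb: 3, 0xc: 2, 0xd: 3, 0xe: 2, 0xf: 2}
print(sum(steps.values()) / len(steps))  # 1.8125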

We just can't beat the efficiency of a memorized table for multiplication. While binary is the obvious choice for splitting a hexadecimal operation, its splitting is the most baroque as it is the smallest possible base. (And no, we can't use quaternary instead, because then the memorization requirement extends to force the triples of each number to be memorized also, which falls outside the rolling pattern.)

Furthermore, if we try to use hexadecimal long division, we are stymied because we don't know the full hexadecimal multiplication table. But if we try to use binary long division, we have a problem once the partial dividend starts in the middle of one hexadecimal digit and ends in the middle of the next (as in the example I quoted above of dividing 3 into x[21]). And we have to use it for such small numbers, whereas in decimal we would instantly recognize 33/3 = 11 (an alpha multiple), and in duodecimal we would have z[29/3 = b] down from the memorized multiplication table. We wouldn't recognise any multiples except those few in the roll table, which roughly correspond to the 1x, 2x, 4x, 8x, and 16x rows of the standard hexadecimal multiplication table. Even counting 3 * 4 as the same fact as 4 * 3, the end result is that we have 15 known facts (many of which are trivial 1x and 16x) against 121 unknown facts: we know only about an eighth as many facts as we don't, and hence end up having to use long division for about seven-eighths of the problems that would otherwise be easy! Even counting long division as only twice as convoluted as recognition on sight, that means that a division randomly selected from the hexadecimal multiplication table is on average 1.875 times as difficult as it would be if we had a memorized table! We can bring hexadecimal into the human scale, but only at the cost of halving the intrinsic efficiency that comes from its size!

The sad truth is that hexadecimal is not binary, and we cannot make it binary while still trying to retain hexadecimal's nicer scale. To do so creates a confusing and inefficient mix, and undoes the key nice benefit of hexadecimal's larger size and greater concision: fewer operations being required.

It's still better than tetradecimal in some sense, as its factorization is efficient, it has great binary resolution, and doesn't glorify the almost-needless prime factor 7. But I think both are too large and diluted.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 10 2015, 02:21 PM #21

It should be noted that the above post of mine only covers the problem of acquiring hexadecimal arithmetic, and does not look at other aspects of hexadecimal. We will attempt to remedy this in the paragraphs below. When we ignore the difficulty in acquiring it, hexadecimal is actually not too bad.

One of the key reasons to choose hexadecimal is its pure binary mentality. One can double and halve one's way (these are the two easiest operations to do mentally) from any quantity to any other, given an arbitrary number of steps. We can use the expansions 0.555...x and 0.333...x to zoom into thirds and fifths respectively.
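
As a quick check of those expansions (a minimal Python sketch, illustrative only): truncations of 0.555...x and 0.333...x do converge on a third and a fifth.

Code: Select all

# 0x5555/0x10000 is 0.5555x; 0x3333/0x10000 is 0.3333x
print(0x5555 / 0x10000)  # 0.33332... ~ 1/3
print(0x3333 / 0x10000)  # 0.19999... ~ 1/5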

A good point of hexadecimal is that {3, 5} are omega-related, despite not falling under the whole binary mindset. Nevertheless, 9 = 3^2 is not nicely handled, so this relationship falls just a little behind decimal's relationship to {3}. Hexadecimal is still better in this respect than octal, whose thirds and fifths are clumsier maximal recurrences, though 8 has greater divisor density (50%) than 16 (31.25%).

The opaque totatives {7, 9, b, d} are not the issue in arithmetic acquisition that they would be in most bases, since we are trying to use mediation/duplation instead of Stevin's algorithms (although this cuts the efficiency in half). But they are still totatives, along with the unit, the omega-related pair {3, 5}, and the omega {f}. Fully half the digits are totatives: hexadecimal thus shows totative dominance (more totatives than divisors, a property shared with tetradecimal in the human scale). Any of the eight odd digits {1, 3, 5, 7, 9, b, d, f} may harbour primes.

I'm not sure whether hexadecimal's lack of maximally recurrent expansions is a good thing or a bad thing. To get the multiples of decimal 1/7 (maximal), all you have to do is a frame shift:

[d]
1/7 = 0.142857 142857...
2/7 = 0.2857 142857 14...
3/7 = 0.42857 142857 1...
4/7 = 0.57 142857 1428...
5/7 = 0.7 142857 14285...
6/7 = 0.857 142857 142...

All we need to do is remember when the repeating block starts (and we can do this easily: just go up the digits in "142857" in increasing order).

With hexadecimal 1/7 (semimaximal, like decimal 1/13), the repeating block changes:

[x]
1/7 = 0.249 249...
2/7 = 0.492 492...
3/7 = 0.6db 6db...
4/7 = 0.924 924...
5/7 = 0.b6d b6d...
6/7 = 0.db6 db6...

So I am not sure which is better for memorization, recognition, and handling.
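
For anyone who wants to experiment, a minimal Python sketch (illustrative only) that prints these expansions by long division:

Code: Select all

def expansion(num, den, base, places=12):
    # digits of num/den in the given base, by long division
    digits = []
    r = num % den
    for _ in range(places):
        r *= base
        digits.append("0123456789abcdef"[r // den])
        r %= den
    return "0." + "".join(digits)

for k in range(1, 7):
    print(k, expansion(k, 7, 10), expansion(k, 7, 16))
# decimal: the block "142857" merely frame-shifts;
# hexadecimal: the three-digit block itself changes (249, 492, 6db, ...)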

[d]
Lastly, hexadecimal has only one prime featuring in its prime factorization, 16 = 2^4. This means that it has fewer regular numbers than decimal or duodecimal, and thus has fewer snap-points to round to. On the other hand, this does mean it has infinite resolution for the prime 2, so duodecimal's auxiliary-base vicious cycle is avoided: you have infinite resolution for the prime 2 anyway, so you don't need as many occurrences of 2 in your auxiliary. Hence even 18x = 24d may be acceptable, and 3c0x = 960d surely is (and is still usable: you don't need 960 markings on your protractor - 480 will do, as you can get half a marking accurately via interpolation).

But all this is of questionable utility for society, as if you can't learn the arithmetic efficiently, you don't get to enjoy any of hexadecimal's good features till later. I think with hexadecimal we're still taking twice the time needed to learn arithmetic in {8, 10, 12}, just like in tetradecimal.

Hexadecimal does have good features. It is not obviously worse than octal when we factor out the differences mitigated by using mediation/duplation with bitwise numerals for hexadecimal. But I think the price of abandoning our current algorithms is a mite too high.

I have started to appreciate its beauty a little more after considering it in more detail now, though.

1.
2.
3.
4.
5. Hexadecimal (16)
6. Tetradecimal (14)

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 11 2015, 06:56 AM #22

Not only were tetradecimal and hexadecimal of a dubiously large size to begin with, they also (alone among the bases being considered) have totative dominance. Let's consider the list of divisors for each base in the human scale I'm considering here:

6: {1, 2, 3, 6}. 4 divisors.
8: {1, 2, 4, 8}. 4 divisors.
10: {1, 2, 5, 10}. 4 divisors.
12: {1, 2, 3, 4, 6, 12}. 6 divisors.
14: {1, 2, 7, 14}. 4 divisors.
16: {1, 2, 4, 8, 16}. 5 divisors.

And now the list of totatives:

6: {1, 5}. 2 totatives.
8: {1, 3, 5, 7}. 4 totatives.
10: {1, 3, 7, 9}. 4 totatives.
12: {1, 5, 7, 11}. 4 totatives.
14: {1, 3, 5, 9, 11, 13}. 6 totatives.
16: {1, 3, 5, 7, 9, 11, 13, 15}. 8 totatives.

In fact, the only numbers with at least as many divisors as totatives are {1, 2, 3, 4, 6, 8, 10, 12, 18, 24, 30}. The last three are too large and the first four are too small (not to mention that unary is a silly degenerate base), leaving only {6, 8, 10, 12} within the human scale. This demonstrates one of the bad side effects of dilution: as a number increases, so does its span of digits, which grows to include more prime totatives faster than the number can accumulate prime factors to ameliorate them.
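
That list is easy to verify by brute force; a minimal Python sketch (illustrative only):

Code: Select all

from math import gcd

def divisor_count(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

def totative_count(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print([n for n in range(1, 1000) if divisor_count(n) >= totative_count(n)])
# [1, 2, 3, 4, 6, 8, 10, 12, 18, 24, 30]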

This seems to show that {14, 16} resist more than they help arithmetic, exactly in line with the more detailed examination above. They probably cannot function as adequate replacements for ten in today's society, as we'd have to spend twice as much time on arithmetic.

It is probably still worthwhile to include {14, 16} for comparison, especially as hexadecimal gets so many advocates (who have probably only half-thought about the issue); but in the end the "stable" range seems to have duodecimal as the upper limit. That is in itself one argument for duodecimal, as it then maximizes concision while still keeping arithmetic as easy to acquire and use as it is now; but I'm getting ahead of myself. If we want a base that can step up and take the role of ten today, and leave everything else constant (assuming a magical instantaneous conversion of radix), only {(6), 8, 10, 12} seem up to the task.

What of the lower limit? Octal is assuredly within the acceptable range, as its concision is approximately the same as that of decimal. Senary is an interesting case. As shown above, in senary you'll need one more digit than in decimal to express most numbers; but each place also offers fewer possible digits, so a string of digits is likely to start repeating digits sooner.

This trade-off has consequences. Does the shorter string length of decimal help more in memorization, or the fewer possibilities for each digit in the string in senary?

I fear I may need to hold off my full investigation of senary (akin to my bashing of {14, 16} above) until I can find some studies on this. If I may use personal anecdotal experience (it's never a good sign when someone supposedly trying to do a fair and balanced assessment has to resort to that...), I'd say that it helps more to have more possibilities per place. For example, I've memorized the atomic numbers, names, and symbols of the elements (and most of the atomic weights to the nearest integer), so I find I can simply double the length of a string I can remember by chunking decimal digits into twos and then converting them to atomic symbols!

97845160387524 - Bk Po Sb Nd Sr Re Cr (I'd use Fm, element 100, for "00")

So it appears that in my case, I can remember seven digits, but it doesn't matter if these digits are decimal or centesimal. It only matters that each one sounds like a single, non-decomposable unit.

In that case, senary may suffer a little, by taking a little more memory space with each number, by adding one more digit over decimal. We could use hexatrigesimal in conjunction with it as senary-compression. There are some warning signs flashing at that suggestion in my mind, as we're not using the base for itself anymore, but in conjunction with another. This would end up as using senary arithmetic with hexatrigesimal, and we all know how that turned out with hexadecimal/binary, as I just covered it. This criticism does not apply so badly if we do all our calculations in senary, and only pack up the final results in hexatrigesimal, instead of trying to mix bases, as senary's splitting of numbers into places is nowhere near as inefficient as binary's (again, only one more operation than decimal, and they are more likely to be trivial or end in zero if they are multiplications).

As it stands, though, the only argument I can come up with for sure is on the relative importance of the quarter, which is a key issue as 4 divides 12 but not 6. This is basically asking if doubling senary is worth it, as it does not actually dilute the base that much (we get an opaque {5, 7}, but we still have more divisors than totatives, and {8, 9, a, b} in the upper range are not terribly resistive like hexadecimal {9, b, d}).

In some sense senary is here behaving like a small base rather than a human-scale base, as doubling it leads to a human-scale base, and it shares with {2, 3, 4, 5} the property of having no opaque digits at all and a multiplication table that is so small (a third of the size of decimal's) that it could be memorized easily even if it were completely patternless, which it isn't as 6 is an SHCN. ({7} also has this property, as we can probably remember a string of up to seven different numbers.)

We are getting into the range where we need to go into even more detail in our examination of each human-scale base, and cannot throw them out based on general considerations like we can for {14, 16}. We need a detailed investigation. We need to imagine conducting the experiment, imagine a senary/octal/duodecimal civilization, go deep into it, and see what it would be like.

Who knows, we could make a popular alternate-universe scenario out of this...

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 11 2015, 08:32 AM #23

Octal

Nevertheless, in the remaining group {(6), 8, 10, 12}, one base does stand out in its prime factorisation:

6 = 2 * 3
8 = 2 * 2 * 2
10 = 2 * 5
12 = 2 * 2 * 3

Only octal is a prime power, with no prime factor other than 2. Hexadecimal is likewise, but we've already eliminated it.

It seems clear that octal is superior to hexadecimal. Even ignoring the previous demonstration that hexadecimal is probably not really usable in society, it remains that octal has only one opaque digit {5} while hexadecimal has four {7, 9, b, d}.

However, one important difference is that octal can be used in its own right, while hexadecimal cannot. So civilizational hexadecimal behaves like binary compression, while civilizational octal behaves like a totative-rich power of two instead, with totient ratio 50%.

Being a power of two has its perks. It means that you get infinite resolution for powers of two in divisibility tests, and can double or halve until infinity. On the downside, all other primes are treated poorly. Every third number is divisible by 3, and thirds crop up often even in our thoroughly decimal world (0.333... and 0.666... are common sights). Incorporating a factor of 3 instead of yet another 2 into the base improves the totient ratio from 1/2 to 1/3 (totatives drop from half the digits to a third of them), while the base's magnitude only goes up by a factor of 3/2 (an increase of 50%). This trade-off is beneficial!

Everything is a trade-off. Most trade-offs that increase the size of the base are not beneficial, due to the dilution problem. But octal is so "inbred" that this particular trade-off ends up helping more than it hurts. In octal, the lines of {3, 5, 6} in the multiplication table are harder to memorize; in duodecimal, only {5, 7} are, while {1, 2, 3, 4, 6} are truly a breeze (better than octal, where just {1, 2, 4} are).

Octal must be treated as a base in itself, not binary compression alone: and when we treat it so, we see that we cannot argue that binary is so fundamental that we should use a base that is this related to binary. At the magnitude of {2}, we can only fit one prime factor, which of course should be 2. But at the magnitude of {8}, we can fit three, and not all of them ought to be 2 if we want to maximize the base's usefulness. 2 is not the only important prime.

It is true that octal has better divisibility tests than hexadecimal, treating {2^infinity} within its sphere of regular numbers, {3^2} from the alpha, {5, 13} from SPD and {7} from omega, while hexadecimal has to make do with only {2^infinity} intrinsically and {3, 5} from omega. But octal having a means of testing for more primes (and 3 more deeply) does not make its fractions any nicer:

[o]
1/2 = 0.4
1/3 = 0.252525...
1/4 = 0.2
1/5 = 0.14631463...
1/6 = 0.1252525...
1/7 = 0.111...
1/10 = 0.1

[d]

Even hexadecimal has superior fractions, with 1/3 = 0.555...x and 1/5 = 0.333...x. Octal treats thirds even worse than decimal: if even the treatment of thirds in decimal leaves something to be desired (it's still not a divisor), octal can't possibly be better, to say nothing of senary or duodecimal with terminating thirds.

It is true that octal being a power of 2 gives it infinite resolution for testing powers of 2, unlike bases with other prime factors like {6, 10, 12}, whose regular tests eventually scale out of usability. But including other prime factors gives more regular numbers as snapping points, and more terminating fractions. As I wrote:
Once we get out of 2's admittedly large sphere of influence, octal falls flat on its face, while repeatedly apologizing for messing things up and seeking to ameliorate the mess with its divisibility tests.
...
In the end, it comes down to flexibility, which is sadly not among octal's virtues. It's very focused and, while it can just about get a handle on some other small primes (a little better than hexadecimal in its range), it keeps on stumbling on them.

Even decimal is better, and duodecimal would be better still.
With its only redeeming features being its divisibility tests, and its usefulness in niches where binary is important, I think octal is probably the worst human-scale base. Every other competitor has more terminating fractions and regular numbers.

1.
2.
3.
4. Octal (8)
5. Hexadecimal (16)
6. Tetradecimal (14)

(This placement of octal is temporary, assuming senary is within the human scale. I need some more information before I can say with some conviction if it is or isn't.)

P.S. As for the relative valuation of primes and prime powers: my usual estimate is that if you pick a number at random, the probability that p will divide it is 1/p. Therefore 2 will come up more often than 3, which will come up more often than 5, etc. However, this may be arguable for prime powers, as 9 is composed of two 3's (more common) while 7 is composed of just a 7. If 3's are more common than 7's, the bias will be towards 3, so 9 gets inflated in usefulness. If one wants to account for this, we can instead consider the order in which primes and their powers pile up in the SHCNs, which is {2, 3, 4, 5, 8, 9, 7, 16, 11, 13, 32, 27, 25, 17, 19...}. Since we see eighths (and borderline sixteenths) pretty often, but not elevenths and thirteenths, I usually cap this list at 16 now, making 7 a borderline case (and I actually think that for humans it is worse than the list shows, as it is well outside the subitizing range, unlike 5, which is just outside it).

P.P.S. Post number 200o! The 120's are an interesting decade, with the SHCN 120 and the prime powers 11^2, 5^3, and 2^7 all assembling together.

Double sharp
Dozens Demigod
Joined: Sep 19 2015, 11:02 AM

Oct 11 2015, 09:41 AM #24

Senary

There are two issues at stake here when comparing {(6), 12}: decreased compression and the lack of a one-place quarter in senary.

Decreased Compression
Let's consider what the decreased compression really means. I'll compare {(6), 10} here, as {10} is kind of a benchmark for the compression of a typical human-scale base (what base is more certain to be in the human scale than our customary choice {10}?)

A useful rule of thumb is that senary in general needs one more digit than decimal, while octal and duodecimal are comparable to decimal in concision. This is true for 3-digit decimal numbers vs. 4-digit senary numbers. But we ought to bear in mind that hundreds of thousands are not uncommon in today's decimal world, and that 720 720d = 23 240 400h, two more figures.

Given two 6-digit decimal numbers, you have to multiply each digit of one by each digit of the other, making 36 single-digit multiplications before the final additions. Some of these will be trivial multiplications by one or zero. The chance of a digit being 0 or 1 is 1/5 in each place except the first, where it is 1/10; but we'll ignore the restriction on the first digit, as that allows us to include numbers with fewer than 6 digits in our survey. So 36 multiplication operations, of which at least 1/5 are trivial (looking at one factor alone) and need not be counted, means that on average you have at most (4/5)(36) = 28.8 real multiplications to do. Call it twenty-nine operations instead of thirty-six, because some are going to be trivial.

How does senary fare in this regard? Senary will have more trivial multiplications, as the chance of encountering a 1 or a 0 is now 1/3 instead of 1/5. But you will need another two digits to reach the same order of magnitude! So you have 64 multiplication operations, of which at most (2/3)(64) = 42.666... are non-trivial - call it forty-three operations. Senary seems to require about half again as many steps. And each intermediate product is going to take one or two digits longer to keep in memory (which is valuable real estate here) or write down. Since there are 8 intermediate results, that's 8 to 16 more digits written or memorized, while for octal it would be 0 to 7 (one more digit), and for duodecimal it would be even less. Senary's lack of concision appears to be a significant weight tied to it.
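
A minimal Python sketch (illustrative only) of this little model, refined to treat a product as trivial when either factor is 0 or 1:

Code: Select all

def expected_nontrivial(digits, base):
    # schoolbook multiplication: digits^2 single-digit products;
    # each factor digit is 0 or 1 with chance 2/base
    p_nontrivial = (1 - 2 / base) ** 2
    return digits ** 2 * p_nontrivial

print(expected_nontrivial(6, 10))  # ~23.0 for two 6-digit decimal numbers
print(expected_nontrivial(8, 6))   # ~28.4 for the equivalent senary numbers

Counting triviality on both factors lowers both estimates, but the gap remains: senary needs roughly a quarter more non-trivial products for the same magnitude.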

It is true that senary's verbosity can be accommodated when assigning codes by using alphanumerics, but this does not fix its lack of concision in arithmetic. True, you can reduce multiplication by single digits to rules (2 is doubling, 3 is halving plus a frameshift, 4 is doubling twice, 5 is a frameshift and then a subtraction), but again we'll need to keep more digits in memory, and the digits 4 and 5 end up as less efficient compound operations if we use these rules instead of multiplication tables. Converting to hexatrigesimal for storing results doesn't help: either you struggle with the more difficult arithmetic of base 36, or you waste time packing and unpacking and still don't get the concision advantage for actual calculations. Alphanumerics only give a nice base 36 in a decimal world: in a senary world (one the same in everything except the number base), we wouldn't have the digits {6, 7, 8, 9}, and alphanumerics would give base 32 instead of base 36, which wouldn't mesh well with senary.

Carrying is a particularly annoying step in decimal mental calculation. You want to start from the left, as that's how numbers are spoken (and also so that you can impress your audience by starting to say the answer before you've finished computing it), but then carries are going to overwrite your last digits. Not counting the trivial facts with zero, the percentage of additions that carry is 60% for senary and 55.55...% for decimal. But what about multiplication? In senary, every single non-trivial multiplication but one (2*2=4) creates a carry (a 93.75% chance), while in decimal there are six facts (2*{2,3,4}; 3*{2,3}; 4*{2}) that don't, giving a 90.625% chance of a carry. So in senary, not only are the numbers longer, we have more chance of getting annoying carries. Once the multiplication or addition requires thinking, we get a carry more often in senary than in decimal - perhaps not very much more often, but in my experience often enough to notice, as the difference is magnified by the extra digit(s) that senary throws at us.
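
Those percentages are quick to verify; a minimal Python sketch (illustrative only):

Code: Select all

def carry_rates(base):
    # additions a+b with non-zero digits; multiplications with digits >= 2
    adds = [(a, b) for a in range(1, base) for b in range(1, base)]
    muls = [(a, b) for a in range(2, base) for b in range(2, base)]
    add_rate = sum(a + b >= base for a, b in adds) / len(adds)
    mul_rate = sum(a * b >= base for a, b in muls) / len(muls)
    return add_rate, mul_rate

print(carry_rates(6))   # (0.6, 0.9375)
print(carry_rates(10))  # (0.5555..., 0.90625)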

Quarters
Four is within the subitizing range. Senary does not have 4 as a factor. Duodecimal does. And quarters are more often encountered than fifths, so it's jarring to see senary 1/5 = 0.111...h but 1/4 = 0.13h, on a par with 1/7 = 0.050505...h.

It is also true that duodecimal is twice senary and so may emerge as a prominent super-grouping, just as vigesimal has emerged several times in history. Then senary would be lost, and the increased concision and efficiency might be felt as an advantage. Therefore we may argue that senary is not in the human scale, as there is a drive to improve it to a base that is twice its size. Even decimal (which isn't too bad, though deficient) has been doubled to get an abundant number, so why shouldn't senary (perfect) get the same deal?

The increased multiplicity of 2 in duodecimal over senary appears to be a good trade-off. It improves concision, and all but two rows (5 and 7) in the duodecimal multiplication table remain laden with beautiful patterns blooming with roses and sunflowers. (Following Icarus' colour scheme, naturally! ^_^)

Divisibility Testing
(This is probably not as important as the previous issues, but I included it for completeness.)

It is true that senary has alpha {7} and omega {5}, the next two unrepresented primes. It is also true that every prime in senary ends in 1 or 5, next to a multiple of the base, so that we can always use the trim-right test. One must nonetheless consider that primes above {7} are of rather questionable value, although it is true that this feature of senary provides greater flexibility while simultaneously not hobbling or diluting the base (as it would if you incorporated a factor of seven to get easy divisibility testing by seven).

Now, senary's neighbours {5, 7} come up more often than duodecimal's {11, 13}, and so grant senary more flexibility. But I don't see how this is worth giving up the greater concision and better treatment of 4. In fact, if I look at testing binary powers, duodecimal can handle {8, 16, 32} without breaking a sweat, while senary chokes on {16}: you've got to memorize eighty-one four-digit sequences, and range-folding this test requires recognition of fourfoldness at a glance. It may be doable, but it's costly and complicated enough that one might be persuaded to just halve and then test for 8. In duodecimal, the test for 16 is very easy: you need only look at the last two digits (as 16 divides the gross). Four should be a higher priority than five and seven (which is just a borderline bonus), and five still retains a divisibility test in duodecimal.
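
A minimal Python sketch (illustrative only) confirming why the two-digit window works in duodecimal while senary needs four places:

Code: Select all

assert 144 % 16 == 0    # 16 | 12^2, so the last two dozenal digits decide
assert 216 % 16 != 0 and 1296 % 16 == 0  # in senary, 16 | 6^4 but not 6^3
# divisibility by 16 therefore depends only on n mod 144 in duodecimal:
assert all((n % 16 == 0) == (n % 144 % 16 == 0) for n in range(100000))
print(1296 // 16)  # 81 four-digit senary endings to memorize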

This is admittedly not so big an issue, as we live in decimal quite comfortably without trivial tests for {8, 16}. But then senary's longer expansion for 1/4 is somewhat unsatisfying, along with the conflation issue: 1/3 and 1/4 both round to 0.2h, whereas in duodecimal, 0.4z and 0.3z stand. We get less precision per digit in senary, and this really starts to hurt in metrology.

Since senary has problems that duodecimal ameliorates without generating more problems, I think we've shown that we gain by moving from senary to duodecimal. In senary, the problem is not so much acquiring arithmetic as in tetradecimal or hexadecimal, as we can learn senary arithmetic very quickly (the tables are trivial and small). It is instead using what we've learned, as all the extra digits and carries here and there add up to create inefficiency. Going for senary in an attempt to simplify arithmetic for the common man appears to work for arithmetic acquisition, but it seems to lead to greater inefficiency when you sit down and use the arithmetic tables you've mastered.

In the end, I can't say I'm convinced that senary is within the human scale. It may well be "metastable" due to its high divisibility, but I think that in the long run, we're going to lose it to duodecimal, its double. We can still use sixes as half-dozen subgroupings.

And if {6}, like {14, 16}, is not within the human scale, then even octal must outrank it for being within the human-scale trio of {8, 10, 12}.

1.
2.
3. Octal (8)
4. Senary (6)
5. Hexadecimal (16)
6. Tetradecimal (14)

P.S. An interesting point is that {6, 10} both appear to be aiming for pseudo-5-smoothness, but 6's attitude is to serve 3 before 5 (at the expense of its size), while 10's attitude is to serve 5 before 3, though treating 3 incredibly well (at the expense of its totient ratio, though the transparent {3^2} makes 3 almost look like it's not a totative after all). So I'm beginning to think that decimal does beat senary after all due to the scale issue.