Inverse radiation law:
\( F = c\frac{Qq}{4\pi r^2}\) (electrical charges)
\( F = c\frac{Mm}{4\pi r^2}\) (gravitational charges)
The full versions of these are
\( F = zc\frac{Qq}{\gamma r^2}\)
\( F = z_g c \frac{Mm}{\gamma r^2} \)
I don't know where the surface density comes in from. It must be part of Kode's artificery.

wendy.krieger (Dozens Demigod)
Joined: 11 Jul 2012, 09:19
Twelfty is 120 dec, as 12 decades. V is teen, the '10' digit; E is elef, the '11' digit. A place is occupied by two staves (digits).
Digits group into 2's and 4's; . and , are comma points, : is the radix.
Numbers written with a single point in twelfty, like 5.3, mean 5 dozen and 3. It is common to push 63 into 5.3 and viki verka.
Exponents (in dec): Ex = 10^x, Dx = 12^x, Hx = 120^x, regardless of the base the numbers are in.

Oschkar (Dozens Disciple)
Joined: 19 Nov 2011, 01:07
Kodegadulo didn’t mention anything about surface density.
Because \(Z_0 c = \bar\epsilon_0\), and your metavariable \(\gamma\) corresponds to the surface area of a sphere (\(= \sigma\) srad), the "full" forms you're advocating correspond directly to what Kodegadulo said in his last post.
That said, the first pair of equations is patently wrong, because it doesn't take into account the dimensionality of the impedance and dimassic action of free space. In practice, this means something like completely ignoring the masses and electrical charges in the system and taking only their effects into account: the influences and forces they exert on each other.
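The identity \(Z_0 c = \bar\epsilon_0\), i.e. the reciprocal of the vacuum permittivity, is easy to check numerically. A minimal sketch using CODATA-style constant values (the values are supplied here, not taken from the thread):

```python
# Check Z0 * c = 1/eps0 (Oschkar's "Z0 c = epsilon-bar-0").
Z0 = 376.730313668        # vacuum impedance, ohms
c = 299_792_458.0         # speed of light, m/s
eps0 = 8.8541878128e-12   # vacuum permittivity, F/m

lhs = Z0 * c
rhs = 1.0 / eps0
assert abs(lhs - rhs) / rhs < 1e-9
print(lhs)  # ~1.12941e11
```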

wendy.krieger (Dozens Demigod)
Joined: 11 Jul 2012, 09:19
The dimensions are in the equations, not the quantities.
There are no metavariables in any equations. They are variables.
\(\gamma = 4\pi\beta\) actually corresponds to the inverse solid angle. Either you or Kode, or both, have the solid angle wrong.
For example, in Leo Young, we see that \(\Phi=SQ\), which is exactly as in light (lumens = solid angle × candela). One candle puts out \(4\pi\) lumens, and one unit pole puts out \(4\pi\) maxwells of flux.
So given that \(\Phi = SQ = \int \mathbf{D}\cdot d\mathbf{a}\), we see that \(D = SQ/4\pi r^2\) and thus \(F = SQq/4\pi\epsilon r^2\), by using \(F=qE\) and \(D=\epsilon E\). So using \(\sigma\) in the way that Kode does, as 'solid angle', is wrong; it's the inverse of solid angle. Likewise, \(\kappa\) is the inverse of Young's U, which represents the turn in 'ampere-turn'. The opposite unit is not the radian but the 'curl', as in statampere-curl = abampere. A curl = {c} turns.
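The chain \(\Phi = SQ\), \(D = \Phi/4\pi r^2\), \(E = D/\epsilon\), \(F = qE\) can be checked numerically in the rationalized SI case, where \(S = 1\) and \(\epsilon = \epsilon_0\). A small sketch with assumed values (two 1 C charges, 1 m apart):

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
Q, q, r = 1.0, 1.0, 1.0   # coulombs, coulombs, metres

# Gauss: flux Q spreads over the sphere's area 4*pi*r^2
D = Q / (4 * math.pi * r**2)
E = D / eps0              # from D = eps * E
F = q * E                 # from F = q E -> Coulomb's law
assert abs(F - 8.98755e9) < 1e6
print(F)  # ~8.988e9 N

# The light analogy: an isotropic 1 cd source emits 4*pi lumens
print(4 * math.pi)  # ~12.566
```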
The very origin of the NR rules used by COF and Primel comes, after many months of frustration, from writing these two equations:
\[ F = \frac{cQq}{\gamma r^2}\qquad \frac Fl = \frac{2Ii}{\gamma c r}\]
When dimensions are applied, \(\gamma\) has the dimensions of conductance, and when values are applied, it is 1/(376.730 ohms). Unlike the CGS forms, these are single-valued and of practical size (for fpsc, 3.98 volts and 1/95.50 A), and already positioned to allow setting c=1. Unlike the previous rule (MI), where one sets Ampere's constant to 2/N for some arbitrary N, this one is entirely base-free.
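The claim that \(\gamma\) carries the dimensions of conductance can be checked mechanically: if \([\gamma] = \mathrm{S}\), then \(cQq/\gamma r^2\) comes out in newtons and \(2Ii/\gamma c r\) in newtons per metre. A sketch using dimension vectors, i.e. exponent tuples over (kg, m, s, A):

```python
# Dimensional bookkeeping as exponent vectors over (kg, m, s, A).
def mul(*dims):
    # multiply quantities = add their dimension exponents
    return tuple(sum(col) for col in zip(*dims))

def inv(d):
    # divide by a quantity = negate its exponents
    return tuple(-x for x in d)

c     = (0, 1, -1, 0)    # m/s
coul  = (0, 0, 1, 1)     # A*s (charge)
gamma = (-1, -2, 3, 2)   # siemens = kg^-1 m^-2 s^3 A^2
r     = (0, 1, 0, 0)     # m
amp   = (0, 0, 0, 1)     # A

force = (1, 1, -2, 0)           # newton = kg m s^-2
force_per_len = (1, 0, -2, 0)   # N/m

# F = c Q q / (gamma r^2)
assert mul(c, coul, coul, inv(gamma), inv(r), inv(r)) == force
# F/l = 2 I i / (gamma c r)
assert mul(amp, amp, inv(gamma), inv(c), inv(r)) == force_per_len
```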
For Kode to be using \(\sigma\) as solid angle, his equations would look like this:
\[ F = \sigma Qq/4\pi\epsilon r^2 = 4\pi\rho Qq/4\pi\epsilon r^2 = \rho Qq/\epsilon r^2 \]
where \(\rho\) is a steradian, i.e. the flux at one L from a source C is \(C\rho / L^2\), which gives \(C \sigma / 4\pi L^2\). This is how SI does light.

Kodegadulo (Obsessive poster)
Joined: 10 Sep 2011, 23:27
wendy.krieger @ Jan 11 2018, 10:36 AM wrote: The dimensions are in the equations, not the quantities.
Yep, that's Artificer philosophy in a nutshell: Physical reality doesn't have any bearing on what dimensions it's comprised of; it's whatever some Artificer can pull out of their, ahem, imagination, no matter how absurd, and craft into an equation, that trumps everything. Why, an Artificer can entertain multiple alternative "realities" at once as all equally "valid", even when they are mutually contradictory.
wendy.krieger wrote: There are no metavariables in any equations. They are variables.
If the purpose of a symbol in an equation is to stand for a constant of proportionality, then by definition it cannot be a "variable". A constant can admit of only one value (including its dimensionality). If an Artificer nevertheless proposes that this symbol admits of multiple possible values, all equally "valid", then it is, by definition, a "metavariable". Especially if the Artificer's excuse for the multiplicity is that different Artificers at different times have proposed different values for the symbol, and we must, of course, include them all. Of course, only one value can possibly reflect physical reality. Artificers ignore this, so enamored are they of their symbolic artifice. But if the "variability" is entirely a thing of the Artificer imagination, then it is "meta".
Physics is about discovering how the real world works. Along the way, we craft symbols and formulas to describe it and understand it better. When those symbols and formulas fail us, we discard them and craft better ones. One does not burden successful descriptions with the yoke of dragging the failed ones along. Unless one is an Artificer, and obsessed with preserving all past artifice, simply because it was once published in a book somewhere.
As of 1202/03/01[z]=2018/03/01[d] I use:
ten,eleven = ↊↋, ᘔƐ, ӾƐ, XE or AB.
Base-neutral base annotations
Systematic Dozenal Nomenclature
Primel Metrology
Western encoding (not by choice)
Greasemonkey + Mathjax + PrimelDozenator
(Links to these and other useful topics are in my index post;
click on my user name and go to my "Website" link)

wendy.krieger (Dozens Demigod)
Joined: 11 Jul 2012, 09:19
You don't understand, do you?
Science is not about 'right' and 'wrong'. It's about working models that are reproducible and reliable. Where these vary from nature, the difference is an anomaly, which is dealt with in many different ways.
That we have special relativity does not mean we abandon Newtonian relativity. Newtonian relativity is what they teach even in university-grade courses, and SR is an adjunct in some streams. We know that the anomaly between nature and Newton is less than the error in many cases, and while this is true, Newton serves the matter.
The dimensions are in the units. It is, as S. J. Gould says, that rules are made because something has led to something undesirable. A sign might say 'No dogs allowed' because people let dogs do things. But no sign says 'No snakes allowed', because people don't usually bring snakes. Ireland and New Zealand do have 'no snakes allowed' laws, since they don't want a repetition of Florida and the Burmese python.
Likewise, as Paul Dirac notes, if you have the equation, you have the dimensions. The dimensions are a device for calculating, and reside in the units. Even so, they are further from the units than the units are from the measure.
Nearly every one of the hundreds of accounts that I have read of converting CGS from SI, etc., are based on the notion that quantities have dimension, and usually muddle through the process. I have not read one account that is clear. Not even Young.
Of course, we see rabbitears creeping in to cast doubt. There is no difference in setting \(\gamma=4\pi\), than using something like an #DEFINE# statement in C, or setting the value with the type. The purpose of \(\gamma\) is to write single equations for CGS and SI, so it's a matter of setting an IF_DEFINE statement. Really. The two polyglosses at http://www.os2fan2.com/gloss/index.html and http://www.os2fan2.com/glossn/index.html draw off the same code, with the header file with a manual IF_DEFINE. Likewise, PHYSICS.PDF is governed with a switch in the header to write þ or th at the right places.
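The conditional-definition idea she is alluding to, one template equation with \(\gamma\) and \(\epsilon\) bound differently per unit system, can be sketched concretely (in Python rather than C, and with the simpler template \(F = Qq/\gamma\epsilon r^2\) rather than her c-absorbing \(\gamma\); the system table is illustrative, not hers):

```python
import math

# One "template" force law, with gamma and eps bound per unit system --
# the role Wendy assigns to a conditional definition in a header file.
SYSTEMS = {
    "gaussian": {"gamma": 1.0, "eps": 1.0},                       # unrationalized
    "si":       {"gamma": 4 * math.pi, "eps": 8.8541878128e-12},  # rationalized
}

def coulomb_force(Q, q, r, system):
    k = SYSTEMS[system]
    return Q * q / (k["gamma"] * k["eps"] * r**2)

# Gaussian: two 1-statC charges 1 cm apart -> 1 dyne
assert coulomb_force(1, 1, 1, "gaussian") == 1.0
# SI: two 1-C charges 1 m apart -> ~8.988e9 N
assert abs(coulomb_force(1, 1, 1, "si") - 8.98755e9) < 1e6
```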

Kodegadulo (Obsessive poster)
Joined: 10 Sep 2011, 23:27
wendy.krieger @ Jan 11 2018, 03:28 PM wrote: You don't understand, do you?
I know sophistry when I see it.
wendy.krieger wrote: Science is not about 'right' and 'wrong'.
Science is about describing and explaining the physical world, as closely as we can manage, and as simply as we can, without introducing unnecessary junk into the explanation. That's one of the corollaries of Occam's Razor.
wendy.krieger wrote: It's about working models that are reproducible and reliable. Where these vary from nature, the difference is an anomaly, which is dealt with in many different ways.
Yes. And the chief one being: dispense with obsolete models when they fail to explain too many of their anomalies, and replace them with better models. Modern thermodynamics doesn't need to accommodate the phlogiston theory as part of some grand meta-model, simply because some obscure history-obsessed amateur might want to read really, really old phlogiston research papers.
wendy.krieger wrote: That we have special relativity does not mean we abandon Newtonian relativity. Newtonian relativity is what they teach even in university-grade courses, and SR is an adjunct in some streams. We know that the anomaly between nature and Newton is less than the error in many cases, and while this is true, Newton serves the matter.
What, pray tell, do the differences between Newton's and Einstein's models have to do with the dimensional analysis of the unit systems we use? I see absolutely no conflict with using the same units for both, and even for quantum mechanics. Time, space, mass, electricity, energy, etc., are still measured in the same units in all these models. You can't just handwave at us and claim there is a conflict just to score rhetorical points; you have to demonstrate it.
The sorts of insights that more advanced models bring tend to be that certain quantities which were already trivially commensurate, but ignored, now come to be summed in unexpected ways. Given the time difference between two events, \(\Delta t\), and the spatial differences \(\Delta x\), \(\Delta y\), \(\Delta z\), we could always say that \(c\Delta t\), the distance light can traverse during \(\Delta t\), was trivially commensurate with the spatial differences, but of not much interest under Newtonian mechanics. The insight Einstein brought was that we could calculate this:
\(\displaystyle \Delta s = \sqrt{\Delta x^2 + \Delta y^2 + \Delta z^2 - c^2\Delta t^2}\)
and find that this quantity, the spacetime interval between the two events, would always be the same in all inertial reference frames, even if there are differences in relative velocity. But this requires absolutely no change in unit systems or dimensionality from what Newton understood. This formula could have been written by Newton; he just wouldn't have known its significance or its immutability. We realize now that the speed of light \(c\) is a profoundly significant proportionality constant between space and time, allowing us to combine the two into something called spacetime. But it doesn't stop it from being a proportionality constant.
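The invariance claim is easy to check numerically: apply a Lorentz boost along x to an event separation and recompute \(\Delta s^2\). A minimal sketch (the event coordinates and boost speed are arbitrary):

```python
import math

c = 299_792_458.0                    # m/s
dt, dx, dy, dz = 1e-6, 250.0, 40.0, 9.0  # event separation in frame S

def interval_sq(dt, dx, dy, dz):
    # squared spacetime interval, spacelike sign convention
    return dx**2 + dy**2 + dz**2 - (c * dt)**2

def boost_x(dt, dx, v):
    # standard Lorentz boost with velocity v along x (y, z unchanged)
    g = 1.0 / math.sqrt(1.0 - (v / c)**2)
    return g * (dt - v * dx / c**2), g * (dx - v * dt)

s2 = interval_sq(dt, dx, dy, dz)
dt2, dx2 = boost_x(dt, dx, 0.6 * c)
s2_boosted = interval_sq(dt2, dx2, dy, dz)
assert abs(s2 - s2_boosted) < 1e-6 * abs(s2)
```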
wendy.krieger wrote: Of course, we see rabbitears creeping in to cast doubt.
The doubt is already there, Wendy. You earn those "rabbitears" by engaging in your dubious arguments promoting your dubious notions. Scoffing at the criticism under the rubric "rabbitears" does not weaken the criticism in the slightest.
wendy.krieger wrote: There is no difference in setting \(\gamma=4\pi\), than using something like an #DEFINE# statement in C, or setting the value with the type. The purpose of \(\gamma\) is to write single equations for CGS and SI, so it's a matter of setting an IF_DEFINE statement. Really.
Hah! You just made my argument for me, Wendy: C preprocessor macros were one of the first (and one of the crudest) forms of metaprogramming. Except that you apparently don't know the first thing about them, because first you get the syntax wrong (the directives are spelled #define and #if, #ifdef, #ifndef, etc.: no uppercase, no ending #). And second, those are called directives, and they go to the C preprocessor, a text-manipulation program that runs at compilation time, a phase in which a program is being built, not when it is actually run. They are not statements, because that term is reserved for a line of code in the actual target programming language (in this case C) which executes at runtime.
A program text containing preprocessor directives is not an actual program yet. It is merely the template for a program. The preprocessor has to manipulate the text first to yield the actual program. Conditional directives allow you to derive multiple manifestly different programs from the same template, by changing some of the macro arguments. But those arguments no longer exist once the preprocessor has done its work. So they are not "variables" in the same sense as actual memory locations used in the actual target program.
The analogy with your meta-model is clear: You have these symbols that you use to represent meta-arguments allowing you to consolidate manifestly different models of EM into a single template. You pretend that these meta-arguments are "dimensions" in the same sense as the dimensions of actual physical quantities like length, mass, charge, velocity, energy, etc. After all, you use symbols for your meta-arguments that are quite like the symbols used for actual quantities, without any special marking to distinguish them. But to yield real equations of natural law, you have to provide actual substitutions for your meta-arguments. But different substitutions yield manifestly different, and inherently contradictory, physical laws. They can't all be correct. And you cannot hide behind the dodge that these are just "different unit systems". If you say that electric charge is just a derivative of purely mechanical dimensions, that is manifestly in conflict with the assertion that it is a distinct fundamental dimension in its own right.
Well, it's been fun debunking you all over again, Wendy, but now icarus's poor thread here has been hopelessly derailed. I would not blame him if he excised this whole latest back-and-forth and dumped it into an off-topic thread. I don't think I'll prolong this any further; it's gotten quite tedious and boring rehashing all the same drivel from you again. I suggest if you want to pursue this matter any further, you start a thread about it in your own subforum.

wendy.krieger (Dozens Demigod)
Joined: 11 Jul 2012, 09:19
Kode has never used 'weave' and 'tangle', where there can be many programs that process the input file, and lead to output files. Even in my implementation of weave, there are weave commands used in the load, and weave commands that are used in the run. This is how it is possible to put great chunks of comments into a weave tape.
The speed of light, for example, descends from many different quantities, and merges with several more. When you talk of 'the speed of light', it's actually c/n, where c is the spacetime conversion, and n is the refractive index.
The propositions set forward by Einstein, for example, leave in doubt whether it's c or c/n that is the relevant quantity. Maxwell demonstrated that the EM wave travels at the EM velocity \(1/\sqrt{\epsilon\mu}\), and the idea is that we don't know if c/n is constant in all inertial frames. If it were, it would cause all sorts of interesting issues down the track.
Likewise, my 'photon continuity equation' \(nE = zcD = nzH = cB\), is in response to various things about how electric fields travel across large regions where n and z might vary. The stuff they teach in college is really about the simple vacuum case.
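Her 'photon continuity' chain can at least be exercised numerically in the vacuum case, where \(n = 1\) and \(z = Z_0\), using the standard plane-wave relations \(D = \epsilon_0 E\), \(H = E/Z_0\), \(B = E/c\). A sketch (the field value is arbitrary; this only tests the vacuum identities, not the varying-n case she is actually after):

```python
Z0 = 376.730313668        # vacuum impedance, ohms
c = 299_792_458.0         # m/s
eps0 = 8.8541878128e-12   # F/m

n, z = 1.0, Z0
E = 1.0                   # V/m, arbitrary plane-wave amplitude
D = eps0 * E
B = E / c
H = E / Z0

# nE = zcD = nzH = cB should all equal E in vacuum
terms = [n * E, z * c * D, n * z * H, c * B]
assert all(abs(t - E) / E < 1e-9 for t in terms)
```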
Of course, we can easily exclude that we are dealing with 'metavariables', because it is possible to interpret \(\beta\) and \(\kappa\) in real terms, and then suppose that these could vary with space and time. To the extent that something like \(e^2/2h = \alpha\beta\eta\), would mean that if \(\alpha\) varies over time, as has been set forward, then either \(\beta\) or \(\eta\) must. But if your system does not see \(\eta\), then it leaves \(\beta\), a measure of the curvature of space.
In some part, the alignment against UES rule L has been relatively interesting in that it supports 'substances' (i.e. things that don't move at lightspeed), against 'non-substances' (i.e. those that do, except energy).

Kodegadulo (Obsessive poster)
Joined: 10 Sep 2011, 23:27
wendy.krieger @ Jan 18 2018, 08:59 AM wrote: Kode has never used 'weave' and 'tangle', where there can be many programs that process the input file, and lead to output files.
As a matter of fact, I am familiar with Knuth's Literate Programming paradigm, and did play with his weave and tangle preprocessors, which are part of that, about 30_{d} years ago. It's a premiere example of metaprogramming. If Wendy insists on making my case against her metavariables for me, I certainly won't stop her. I wonder, though, how she presumes to know anything about what I have and haven't done in my programming career. But Knuth is hardly the last word in metaprogramming. It's a technique I've used and encountered in many different contexts using many different programming platforms.
But what part of the following did Wendy not understand?
Kodegadulo @ Jan 12 2018, 06:07 PM wrote: Well, it's been fun debunking you all over again, Wendy, but now icarus's poor thread here has been hopelessly derailed. I would not blame him if he excised this whole latest back-and-forth and dumped it into an off-topic thread. I don't think I'll prolong this any further; it's gotten quite tedious and boring rehashing all the same drivel from you again. I suggest if you want to pursue this matter any further, you start a thread about it in your own subforum.
Really, icarus ought to consider splitting the whole discussion of Wendy's obsolete EM notions, starting from around this post of hers from Jan 3, and move the lot into a new thread under Wendy's Cough subforum. And maybe some of the discussion centered around DK ought to be split off into another thread too. The diversions from the OP have gotten way out of hand.

SenaryThe12th (Newcomer)
Joined: 01 Mar 2018, 14:03
If there were a "one true base" to rule them all, a coherent measurement system would be the natural way to go. But alas, there is no such thing.
When you are using a base which is *almost* perfect for your task at hand, a judicious choice of a non-coherent unit of measurement can go a long way towards smoothing out the difficulty.
In fact, one way of thinking about it is that you are deploying an auxiliary base. E.g., measuring length in feet can be seen as a way of using base 12 as an auxiliary base to base 10, forming a kind of 10-on-12 twelfty system.
When thought of this way, and noticing the plethora of non-coherent units of measurement in actual day-to-day use, it becomes apparent that mixed-base arithmetic really isn't some exotic, hard-to-learn technique, but is actually commonplace and almost ubiquitous.
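A small sketch of the auxiliary-base idea: adding feet-and-inches is decimal arithmetic with one base-12 column, where a carry of 12 inches rolls into the feet column (and 63 inches is "5 dozen and 3"):

```python
# Feet-and-inches as base 10 carrying a base-12 auxiliary column.
def to_feet_inches(total_inches):
    feet, inches = divmod(total_inches, 12)
    return feet, inches

def add_lengths(a, b):
    # a, b are (feet, inches) pairs; add with a base-12 carry on inches
    return to_feet_inches((a[0] + b[0]) * 12 + a[1] + b[1])

assert to_feet_inches(63) == (5, 3)       # 63 = 5 dozen and 3
assert add_lengths((3, 7), (2, 9)) == (6, 4)
```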

icarus (Dozens Demigod)
Joined: 11 Apr 2006, 12:29
I use mixed-radix arithmetic constantly. Not only in my own exotic concoctions (base infinity : dozenal : octal-hexadecimal, etc.) but in hours-minutes-seconds, degrees-minutes-seconds, and feet-inches-and-fractions-thereof.
People tend to use small numbers in daily life, in the tens and hundreds (dozens and grosses). In this scale, dozenal is pretty good. When we get into thousands, we'd be sore for a factor of 5. The situation is, by that point people are wont to use calculation devices.
Therefore I support your assertion, hexie (I lurrrrve the avatar. I have a brass hex-head down in the crypt and once used it to fasten a piece of metal in a damp area, but overtightened it and it semi-melted from the force and friction. That was a big mistake, but it looked nice and golden).
But that doesn't nullify the validity or interest in studying a coherent twelve system.

SenaryThe12th (Newcomer)
Joined: 01 Mar 2018, 14:03
icarus wrote: I use mixed-radix arithmetic constantly. Not only in my own exotic concoctions (base infinity : dozenal : octal-hexadecimal, etc.) but in hours-minutes-seconds, degrees-minutes-seconds, and feet-inches-and-fractions-thereof.
My favorite vest-pocket specimen is how rap stars brag about their gold chains: "It's 36 inches long, with 10 mm wide links. Half a kilo of 18-carat gold."
These guys are not generally known for their advocacy of mixed and alternate number bases, but nevertheless, they are experts at them and effortlessly deploy them. They fit so naturally into their way of life that they don't even notice they are doing it.
icarus wrote: The situation is, by that point people are wont to use calculation devices.
True, but cheap calculation devices are a double-edged sword: yes, they kind of take the pain out of using a strict coherent unit system. But they also take the pain out of using a mixed unit system, by making it easy to convert between units. That's probably why cheap calculators haven't killed all these non-coherent units we use all the time. When we are estimating or doing it in our heads, we remain free to choose the units which make that the easiest, knowing that when we have to calculate the precise answers we can let the calculators do the heavy lifting for us.
icarus wrote: Therefore I support your assertion, hexie (I lurrrrve the avatar.
*chuckle* You are nobody until icarus gives you a cool nickname. ;)
icarus wrote: But that doesn't nullify the validity or interest in studying a coherent twelve system.
Agreed. In fact, until you do such an exercise, you don't know for sure where the real pain points are: whether throwing in a non-coherent unit actually fixes a problem, or is just more adhockery.