*Saturday, September 29^{th}, 2023, at 5:16 PM Rio de
Janeiro Time*

*Author: Dr. Mattanaw, Christopher Matthew Cavanaugh,
Retired*

Interdisciplinarian with Immeasurable Intelligence. Lifetime Member
of the High Intelligence Community.^{6}

- Masters Business & Economics, Harvard University (In Progress)
- Attorney, Pro Se, Litigation, Trial, Depositions, Contracts (e.g.,
State of Alaska v. Pugh, et al.)^{4}
- B.S. Psychology, University of Maryland, 4.0, Summa Cum Laude^{1}
- B.S. Computer & Information Science, University of Maryland,
3.91, Magna Cum Laude^{2}
- B.A. Philosophy, University of Maryland.^{3}
- G.E.D., State of Maryland, Montgomery County, 1999.

Former Chief Architect, Adobe Systems

Current President/Advisor, Social Architects and Economists International.

Contact:

Resumé

*Saturday, September 29^{th}, 2023, at 5:16 PM Rio de Janeiro Time*

*Friday, September 29^{th}, 2023, at 7:56 PM Rio de Janeiro Time*

*Friday, June 24^{th}, 2022, at 1:35 PM Alaska Time*

The primary purpose of my work in mathematics is to update the field, as a whole, to fix many problems in its implementation that relate to apparent artificiality, difficulty in learning for some, arbitrariness in symbology, and separateness from fields of application in the sciences. Scientific realism and modeling using mathematics is at odds with application, and this is the cause of the bifurcation between mathematics and the areas in which thinkers believe it applies to nature. There is a simultaneous desire that mathematics retain its historical trajectory, and a desire that it also model nature. But the history and lineage appear to have created a pathway that keeps nature and math divided.

There are concrete problems of interest relating computing to mathematics that have repeatedly confirmed this insight I’ve had about the need to overhaul mathematics at a fundamental level. There is also an assumption that mathematics cannot be explained entirely using logical atomism, which inadvertently implies that the logic of natural language and descriptions cannot explain math entirely. Making the assumption that logic cannot underlie mathematics carries the implication that mathematics cannot be rewritten in natural language. Yet we hear that mathematics is a language. Furthermore, we hear that mathematics is “the language of nature”, and yet language is the language of nature too. If they are not mutually translatable, then it appears one must commit to the view that math cannot be taught in natural language without the same symbolic expressions. If natural language is to explain mathematics, it will require the logic that is present within natural language too. There are contradictory commitments in the field of mathematics, and it is my work to resolve this issue and, more specifically, concrete issues of intense interest that would change diverse fields. These specific issues, once solved, would verify my perspective, and would provide a persuasive argument or set of arguments that much of mathematics needs to change. This includes the commitment to symbolism present in all of mathematics, from history to present.

The principal binder theory that relates to the concrete problems I’m working to resolve is the Theory of Wanattams, pronounced “one atoms” or “wan atoms”. Notes of work relating to this growing theory are provided in this chapter of my Book and Journal. The outcome of the notes and work in this chapter will be an initial book on the Theory.

Since much of what is written here impacts even fundamental arithmetic and insights regarding the original application of mathematics, some of it will seem quite rudimentary. But like a normal curriculum in a mathematics program, it will build to complexity. There are interesting philosophical problems of basic mathematics that are visited and resolved while covering basic arithmetic. Later there is a transition to the utilization of a modified logic combined with natural language to prove various theorems. A new perspective on ‘equality’ is provided. New notation is introduced. As the work develops, the new combination of symbols, logic, application of natural language, and proofs will be used to communicate more sophisticated mathematics present in computing and the various sciences.

This work is not one that can totally rewrite all of mathematics, obviously, because the library of mathematics is large and extensive, and no person can master all fields. This is a standing assumption of mathematicians in the field, so it is anticipated this would be well understood. Mathematically, however, one need not rewrite everything to show that mathematics does need an overhaul. Instead, some key areas of illumination and application that are general enough would show that the changes would permeate most math. My intention is to provide enough work to show that, firstly, the Theory of Wanattams is a Theory worth supporting, that it is instrumental, that it has better explanatory power than candidate systems, and that it really does solve problems of sufficient general interest in mathematics that it will be clear to readers that an overhaul is required.

Along with the overhaul come omissions to the system, which would immediately indicate to the reader that if the omissions are needed, any system employing the omitted items would need rewriting. This growing list of omissions is included below, and the development of this particular book will carry out all of the specific omissions mentioned, providing the required examples.

Additionally, there are symbolic changes made to render the system less arbitrary and less committed to unnatural methods of expression. Those who are involved in computing would recognize that some equations of mathematics strangely include programming. Simple summations, derivatives, and integrals showing repeated calculations along a range are examples. It is clear to the author that these expressions antedate programming but include a programmatic style of thinking. There are programs in equations! But as written they are artificial, and again, do not lend themselves to combining the sciences, particularly with computer science.

This is a growing and living work, so the reader should anticipate that the creativity of the author/mathematician will result in additional desired symbolic changes, additions to the omissions of mathematics, and other improvements. The author has in mind improvements to the symbols of numbers themselves, and this is a complex topic that will require gradual development and testing.

- Zero
- Decimals
- PI
- Infinitesimals
- Infinity
- Any number not of Base

^{1} Unless it is totally required for illustration or for readability.

The reader will notice from the above list that some items are obviously connected with arbitrary commitments in mathematics and have either limited applicability, or a theoretical aspect that would lead one to believe they do not relate at all to anything natural. Most have been uncomfortable with infinity, for example. This has been removed, and grounds for the removal will be explained. Zero is also a candidate for removal, having mathematical issues relating to definitions and the division-by-zero problem. The division-by-zero problem indicates an issue with arbitrary rule inclusions related to earlier commitments to something artificial. Reasons for omission are discussed, and of course, zero is not utilized to perform the same work.

For each item in the growing list above that is eliminated from my mathematics, a demonstration will be given that the same work is achievable without it. This implies the system is more parsimonious and free of unjustifiable historical commitments.

- Old symbolic equations are depicted only as images.
- Typing of mathematics is strange and artificial. Interest in
changing the means of expressing mathematics in type is a non-trivial
change, but a necessary one that will be found to be agreeable. There
are mathematical justifications for the change of symbology to utilize
type, relating to efficiencies in publication. Mathematical
justifications for changes to symbology will be provided. The author has
commented elsewhere that mathematics has not been
*applied to itself* sufficiently, and a key example would be the mathematics of efficiencies and naturalism relating to historical notation.

- Those symbols depicted as images are done for reasons of efficiency, and will be replaced by plainly typed mathematical strings of text.

There is a confusion about mathematical modeling: the thought that, somehow, mathematics can be used to represent all of nature. It was stated above that there is a symbolic issue with mathematics, and also that there are descriptive issues relating to the present inability to combine mathematics with natural language and logic. Mathematics does have a very strong ability to represent very specific metrical information in a descriptive way, concerning specific natural phenomena. My thinking here is primarily directed to equational representation of natural phenomena and not diagrammatic, and it is understood that branches of math dealing with geometry and graphs do provide additional representative power. However, what is typically represented with equations of mathematics is highly specific, even though those specific representations are widely general. Thinking of a simple example, the equational representation of motion involves a highly targeted representation that is also very widely general. This can lead one to the confusion of overgeneralization: believing that what one has captured from nature and represented in the equation is more representative than it really is. Its wide applicability does not change the fact that that wide application still concerns a narrow specificity.

Joint combinations of mathematical formulae and equations are used together to depict increasingly complex phenomena, but the utilization of the joint equations still focuses on extremely narrow quantifications and depictions. Probabilistic predictions using joint sets of mathematics combined are still highly focused in their interest, and their powers of prediction are narrow.

Fields like economics, striving to combine large amounts of information into sets of equations to represent the state of an economy, with aims to predict and change trajectories, are still extremely focused, and the work is often to pinpoint social changes that can be made to correct for certain unwanted conditions.

While it may be possible to further grow interrelationships in fields of science and mathematics, such that the inputs and combinations of equations result in depictions of ever more complex phenomena, for use in simulations and the like, there remain issues of representation related to the specificities of the formulae.

Currently the method of going past the mathematical representations is to use natural language, large sets of data, and computer systems to create depictions. This again indicates that what is represented by specific mathematical formulae is very narrow. The combination of natural language, diagrams for communication, computer systems, and huge data sets, with mathematics, results in the greater joint representation of the natural world. It must be addressed what role mathematics itself has in this total depiction of nature, and it must be admitted that equations alone really do not have much power without connections to the other systems, which do not necessarily have a purely mathematical foundation. Expressing the different mathematics underlying different systems utilizes different branches of math that have not been combined thoroughly. We pluralize the word math as maths probably partly to indicate that what we are utilizing are separate kinds of mathematics, not jointly expressed, using methods that are at present non-combinable. It is not clear that one branch of math really is related to other branches when one looks at the symbolism and means of expression. That it appears an intractable problem in the short term to combine mathematics indicates that the equations and mathematics of branches like physics cannot be as expressive or instrumental alone as some might hope.

In order to increase the expressive representation of language to capture nature it will be necessary to combine languages. Natural language descriptiveness will need to be combined with traditional equations of calculus, and methods of mathematics from graph theory and geometry. These will need to be further combined with logic and computer programming.

Nowadays we have simulation tools like flight simulators that provide visual simulation of many phenomena relating to flight, but the simulators are heavy on programming, and this programming has not been combined with mathematics in a way that would let us say the resulting simulation is truly physical. Instead we have, as with other games, something that approximates true experience with the use of real testers, oftentimes pilots, who will say that a particular aspect of the game “feels real” and would fool them into thinking that the visual simulation is the same as the natural world. Games like RPGs and combat games may also fool one into thinking that the game has properties of the natural world, and if good enough, players will feel that the game has a good resemblance to reality in various ways, as with human movement, quality of natural spaces, and precision in game play, like shooting targets, throwing objects, or generally examining the environment in a way that feels natural. But all of this is very programming-heavy, and is less interested in mathematical equations than one might suspect. Those producing the games are not that mathematical in their understandings and rely greatly on what was pre-built, and any math included has certainly not been reduced to the logic of the system. Flight simulation for complex flight would be ever more precise for specific scenarios and would employ more physics and mathematics, particularly since flight simulation training relates to more complex planes, and military investments allow for development with specialists who are physicists, aeronautical or otherwise, and mathematicians. There is a desire for greater ability to represent details of flight that may risk pilots or investments. But the locations of these parts of the simulation are not the whole simulation, and again, the mathematics and equations of physics find in these locations very narrow specificity and application.

The purpose of this paper, as was stated, is to update mathematics, or to drive persuasion toward the realization of specific needs to combine fields.

Simulations will only be fooling users, in the most real-to-the-world complex representations, into thinking they appear “real enough”, until fields are combined such that simulation really does get redefined fundamentally, making these representations more accurate. I would not necessarily think of this as a purely mathematical adventure, to finally “bring it all together”. Instead, I would think of the result as a more fundamental and joint understanding and rewrite of natural language, logic, computer systems, and mathematics, such that they are more mutually cohesive and coherent. What we have at present is a patchwork approach to quilting systems together. What we want is more of a consistent product output that combines all of them in a way that utilizes mutual representation.

This is a large objective, and here the first interest is the representational characteristics of old symbols of mathematics, like those existing in calculus. From a reading of the above it should appear obvious that equations from famous authors and their papers, books, and manuscripts are very narrow in their applications, even if they are extremely pervasive and broad in utility. Examples will be given throughout the text as equations appear and are utilized.

The conditional, or “if then” statement, is written traditionally, but not always, as it is here used below:

p > q

The truth of this statement has been given a truth table traditionally as follows:

p, q, p > q
t, t, t
t, f, f
f, t, t
f, f, t

Within the theory of Wanattams, the conditional is equivalent to the conjunction, meaning they have the same truth conditions and truth table. However, when the conditional is used, as necessary to convey a symbolic expression matching that of ordinary natural language, it will be understood to have the following truth table:

p, q, p > q
t, t, t
t, f, f
f, t, f
f, f, f

Which is equivalent to

p, q, p & q
t, t, t
t, f, f
f, t, f
f, f, f

It is not necessary that all permutations of the truth table have a logical operator for use.
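These truth conditions can be checked mechanically. Below is a minimal Python sketch (the helper `truth_table` and its row format are my own, for illustration only) that generates the traditional conditional table and confirms that the Wanattam reading of the conditional coincides with conjunction:

```python
from itertools import product

def truth_table(op):
    """Return rows (p, q, op(p, q)) over all four truth assignments."""
    return [(p, q, op(p, q)) for p, q in product([True, False], repeat=2)]

# Traditional (material) conditional: false only when p is true and q is false.
material = truth_table(lambda p, q: (not p) or q)

# The conditional as read in the theory, identical to conjunction.
wanattam_conditional = truth_table(lambda p, q: p and q)
conjunction = truth_table(lambda p, q: p and q)

assert wanattam_conditional == conjunction  # same truth table, as stated above

for p, q, v in material:
    print("t" if p else "f", "t" if q else "f", "t" if v else "f")
```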

The concepts of validity, invalidity, and soundness hold new definitions in this context.

A valid argument is true if the premises in the argument are true and the consequent is physically relevant to the premises, by a connection indicating that if the premises are true, the consequent must be true. If there is no physical relevance between the consequent and the premises, the argument is invalid. If the premises and the conclusion are both false, it is also invalid. If the premises are true and the conclusion is false, it is also invalid. Under this interpretation of validity, a valid argument is also a sound argument: it requires that the premises are true and the conclusion is true. The concept of soundness is treated here as a mere synonym of the word valid.

The logical connectives “and”, “or”, and “if-then” can each be replaced with a single logical connective, which can be interpreted as “incompatibility” or “not-and”. Its truth table is as follows:

p, q, p|q or p !& q
t, t, f
t, f, t
f, t, t
f, f, t

A triple combination of “not-and” creates a truth table equivalent to that of “and”; therefore one can rewrite “and” with not-and wherever “and” is used. The result will still be a correct natural language statement replacing the statement originally containing “and”. This may be unusual, but an alternative language from a foreign nation, or a new language, could simply omit the word “and” and instead use incompatibility, or not-and, in its place.

p, q, p|q, p|q, (p|q)|(p|q) or ~[~(p&q) & ~(p&q)]
t, t, f, f, t
t, f, t, t, f
f, t, t, t, f
f, f, t, t, f
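The reduction to incompatibility can be verified exhaustively in a few lines of Python (the function name `nand` is mine). The constructions for negation, “or”, and the conditional are the standard Sheffer-stroke identities, included to show that each connective is recoverable from incompatibility alone:

```python
def nand(p, q):
    """Incompatibility: true exactly when p and q are not both true."""
    return not (p and q)

for p in (True, False):
    for q in (True, False):
        # Three nand operations reproduce conjunction: (p|q)|(p|q).
        assert nand(nand(p, q), nand(p, q)) == (p and q)
        # Negation: p|p.
        assert nand(p, p) == (not p)
        # Disjunction: (p|p)|(q|q).
        assert nand(nand(p, p), nand(q, q)) == (p or q)
        # The traditional conditional: p|(q|q).
        assert nand(p, nand(q, q)) == ((not p) or q)
```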

The concept of equality is left out of the theory of Wanattams, and no symbol is used for it. Instead of a symbol, another line is employed, or a separator. From one line to the next, transformations that result in translations, or reorganizations, are used. The separator performs the same role as a new line, simply providing a demarcation between one symbolic statement and another symbolic statement that is a translation of it.

Translations are categorized in a number of ways. A translation that is unsuccessful is just “not a translation”. A translation that is successful is “a translation”. An objective is to move from line to line, and from one side of a demarcation/separator to the other, preserving a translation. To preserve a translation one has conducted a logically valid operation of physico-symbolic manipulation. This is a conditional movement, redefinable in terms of the “and” operation, and equivalent also to a use of a number of the nand/incompatibility operations. The last would not be known to be true except to the logician, and so we put that off for now.

This means we will omit usages such as “=”, “==”, and “===” that occur in programs, mathematics, and the sciences, and will instead prefer the separator and new line. In pseudocode, to make things clearer and actually usable for those working in programming languages that currently exist, the “=” symbol will be used, but it must be understood, and recalled, that each and every time it occurs it is a mistake: it is only used tentatively as a shorthand for a separator or newline, in the absence of a change to the symbolism in programming languages. The reason it is a mistake is that it preserves the tradition of using “equality” in a way that will perpetuate erroneous understanding. That will be further explained in this text, and is explained partly, in an approachable way, in Abandoning Equality. Anytime “=” is seen in the pseudocode it must be understood to mean translation or transformation with relevance, according to the explanation above.
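As a hedged illustration of lines-as-translations in a form current tools can run, a derivation can be written as a sequence of lines, each a translation of the previous, with no “=” chaining them. The numeric `eval` check below is only my crude stand-in for the fuller notion of translation described above, not the theory’s own criterion:

```python
# A derivation written as successive lines separated by a demarcation,
# rather than chained together with "=".
derivation = [
    "(3 + 4) * 2",
    "7 * 2",
    "14",
]

def is_translation(earlier, later):
    """Crude stand-in for 'translation': the two lines evaluate alike."""
    return eval(earlier) == eval(later)

# Moving from line to line must preserve a translation.
for a, b in zip(derivation, derivation[1:]):
    assert is_translation(a, b)
```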

As was stated in the introduction, this work concerns the translation
of mathematics into base_{1} arithmetic. This means there is
only one digit that is used to express each and every number that
exists. This results in an interesting question as to whether there are
other numbers or not, apart from one. The conclusion is that there are
not any other numbers than one, and that each and every other increment
of one, results in a conjunction of ones. The traditional number two
from other bases, including traditional base_{10} arithmetic,
and 10 in binary (base_{2}), is represented as 11. One using
natural language might think, “Well that’s really just two”. According
to Wanattomian theory that is incorrect. It is 1.1 ones, which is to say
“it is one and another one ones”. Much of the work in number theory is
preserved because arithmetic of any base is a translation of
base_{1} arithmetic, however, the *interest* in this
kind of work is diminished because “special value” placed on specific
names of numbers is eliminated. The interest here instead, is all
regarding ones and any conjunctive combination of them. To provide
another example 111 is three traditionally, but in the Theory of
Wanattams it is 1.1.1, expressed as 111, or in the glyph transformation.
Using incompatibility, it can be rewritten as (1|1)|(1|1).
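The base_{1} reading can be made concrete in code. A hedged sketch follows; the helper names (`from_decimal`, `render_dotted`, `add`) are mine, used only to illustrate that every number is a string of ones and that incrementing or adding amounts to concatenation:

```python
def from_decimal(n):
    """Translate a traditional base_10 numeral into base_1 ones."""
    return "1" * n

def render_dotted(tally):
    """Write a tally in the dotted form, e.g. 111 as 1.1.1."""
    return ".".join(tally)

def add(a, b):
    """Addition in base_1 is concatenation of ones."""
    return a + b

assert from_decimal(2) == "11"          # "two" is one and another one
assert render_dotted("111") == "1.1.1"  # three, in the dotted notation
assert add("11", "111") == "11111"      # traditional 2 + 3 = 5, as tallies
```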

The only number to be defined is 1. The definition of one will be included here in the near future, and it will be shown that it is a descriptive and statistical concept relating to the physical world. There is no utilization of one that is not physical, except for the singular application of the usage to itself. Since no other number is defined but one, there is no need for a definition of any other number, and the question as to the definition of numbers then is only about the definition of 1 as it applies to real world objects, and as it applies to itself; all other numbers receive no definition apart from being conjunctions of ones. A simple proof is provided to show that 1&1 is true.

1, 1, 1.1
t, t, t
t, f, f
f, t, f
f, f, f

One and one together can only constitute a pair, meaning there are 11 ones, if their conjunction is true. Otherwise it is false. The conjunction of the two is never to be considered as a conjunct

*To add*

Programmatic completeness relates to the ability of a programming language, or set of languages, to carry out a large range of functional tasks, or all tasks that can be solved with programming, using some number of basic programming structures. What this amounts to is a way of performing tasks in a recipe-like way using a set of predefined recipe-making structures. Once the system for making recipes using these fundamental structures is in place, together they comprise the completeness of recipe making for anything in which recipes can be used. If a programming language is complete, then its structures are adequate for ensuring its completeness. Differing programming languages can be complete with differing structures; this is well known and is partly the cause for having different languages. But more abstractly, there are fundamental structures that must be in each and every language to make them complete, regardless of what additional structures happen to be in place.
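As a small illustration of completeness from few structures, the recipe below uses only assignment, a loop, and a decision (Python here; the task and the names are mine, chosen only to show that a minimal set of structures suffices for an ordinary job):

```python
def sum_of_evens(values):
    """Sum the even entries of a list using only minimal structures."""
    total = 0                # assign
    for v in values:         # loop
        if v % 2 == 0:       # decide
            total = total + v
    return total

assert sum_of_evens([1, 2, 3, 4]) == 6
```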

There may be some argument as to what these control structures have to be, and I am uncertain as to whether or not any formal proof has been devised proving that one set of fundamental structures is better than all other candidate sets of structures. Here I may provide a candidate proof but that is prioritized based on available time and is deprioritized in relation to the total work of providing a justification for the system of Wanattams.

Although a proof of this has been deprioritized in this work, I do provide a highly minimal set of control structures and describe my own approach to programmatic completeness as it relates to the theory of Wanattams, and this information is abstract and general. Therefore it is applicable to more than the theory of Wanattams, and to programming more generally, and readers will learn things that are immediately useful and relevant to their own programming or mathematics.

Once one is sufficiently advanced in programming one becomes aware, sometimes, especially if one has come from formal training in computing, that differing programming languages have a resemblance that is much stronger than their differences, and this relates very plainly to the underlying control structures. If one wants to write a program to do a simple task, and one can already envision the structure of the recipe that does the job, then one can immediately see how it could be coded in a number of languages, not just one, and really in any language that can do it. Sufficiently advanced programmers would be unworried about the prospect of simply writing their code down in another language, because the languages are sufficiently similar regarding the programming basics, and because it is not too difficult to quickly learn the semantics of a new programming language from documentation in order to translate what one has in mind. It is different if a programming language happens to have very little documentation or none at all, but that is a documentation problem, not a problem relating to the ease of performing the task when there happens to be suitable learning opportunity in documentation form.

In beginning computer science courses one might be exposed to the concept of pseudocode, which I hold to be of very great importance. Pseudocode is nonfunctional code, written like a real program, to indicate what the program should do and how it would really be written. Pseudocode is written just like standard code, only in a more abstract way, as a preparatory step in planning how a program should work. When one writes pseudocode, one uses the minimal structures that exist in the various programming languages and provide their completeness. In a way, pseudocode is not unlike mathematical algorithmic notation, which on paper provides the way the algorithm would work without actually executing as a program. Some mathematical expressions are really programmatic expressions, although that went unnoticed until programming came into existence, and is still largely not acknowledged. Mathematical proofs and algorithms provided the foundation for having executing programs at the beginning of computer science and the advent of digital computers, and prior to digital computers. If one wanted to make and describe a computer and its operation, one would arrive at the need to have programs, and would first need to write recipes for processes on paper, which amounts to writing programs with pseudocode on paper.

If you write a program with pseudocode and it is well written, it will work once it is translated into any programming language. My learnings in pseudocode were what led to the realization, in my career, that I really could easily program in any language.
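For example, here is a pseudocode recipe (in comments) and a direct Python translation of it. The recipe uses only generic structures, so the same translation could be made into any complete language; the task itself is mine, chosen for brevity:

```python
# Pseudocode:
#   set largest to the first item
#   loop over the remaining items:
#       if the item is greater than largest:
#           set largest to the item
#   give back largest

def largest(items):
    result = items[0]
    for item in items[1:]:
        if item > result:
            result = item
    return result

assert largest([3, 9, 4]) == 9
```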

In this writing I will make use of my own style of pseudocode. This
code will be usable for translating into any programming language
whatsoever. The pseudocode will be used to provide algorithms and proofs
of various kinds, and will be used to rewrite the symbolism of various
mathematical expressions, that are really programmatic. I will also use
it after explaining various control structures that I find fundamental
in computing and in mathematics. The control structures will be *the
only structures* I will use for any of this work.

Pseudocode provides a fundamental and naturalistic explanation for the control structures and their completeness. Since recipes amount to human processes for doing any kind of manual work whatsoever, if the structures are known that comprise all tasks and processes, they can be written in programming code form, again pseudocode, to describe the process and the recipe. Pseudocode provides what is needed to describe what is essential to any process. Since pseudocode maps to any naturalistic process, and to a recipe for any manual effort, behavior, or work, and is usable for translation into any programming language that is complete and can carry out the task automatically, it really is representative of natural processes. The tightness of the mapping and the degree of scientific realism in the symbolism is something for philosophical and scientific debate, but its instrumentality is unquestioned, due to the advancement of technology, and its utility in teaching anyone how to do anything whatsoever. Learning and teaching have a recipe-like structure too, one that is naturalistic and well represented by pseudocode. At a future time I will discuss the relationship of this perspective to artificial intelligence, where the mapping is not quite as clear owing to the complexity of the topic, and I also desire to relate it to actual learning as it occurs in living brains active within different environments.

Now that I have explained the use of pseudocode and the importance of fundamental programming structures, I want to move on to the topic of what some of those structures happen to be. I will immediately depart from what is traditionally taught and will instead utilize my own set of control structures, which still map to existing control structures in other languages. It is necessary to proceed this way because the Theory of Wanattams calls for a different approach to programming structures. The following is a list of structures that are required:

The word structure is somewhat ill-defined. I will seek to provide a
better definition, but for now will mention it is not too different from
*rules*, and *instrumental chunks or recurring
ingredients* of the pseudocode and programming language. Here I also
mix structures and operations as being sufficiently similar for
inclusion in a list of fundamental programming ingredients. Don't be
daunted by this listing, though some of it is highly technical in
meaning. Each word *does* relate to basic everyday experiences,
with normal words. Words for each point are interchangeable, with
explanation as to each meaning. Each meaning will be explained and will
appear in the glossary. Active words are indicated instead of passive
words, wherever possible, to capture the programmatic or process nature
of each structure. Each structure is part of a system architecture
intended to process and represent process.

- loop
- assign, let, equate (one meaning), set, define, separate, delimit, split, denominate
- parameterize, use
- symbolize (with symbol(s)), characterize
- function, process (and subprocess), file, segment (interchangeable), memory/storage location (what’s in it), data (or piece of data).
- nand, incompatibility, and-variant, add, concatenate (interchangeable)
- decide, if-then, case, select
- shift, locate
- location, pointer

Concepts also described in the Theory of Wanattams for use in mathematics will be explained in terms of this rudimentary list of structures.

I may reduce these further, finding some to be more fundamental than others, such that some can be defined in terms of the more basic ones. At the moment, it is important to note that binary already represents all of these structures, so symbols or electrical signals are of great importance (although a computer does not have to be electrical), and whatever is encoded in the binary or electrical signals must include a small set of structures, like those above, to be complete enough to carry out tasks. Symbol is certainly a primary structure, regardless of whether it is a visual symbol or not.
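As a rough sketch, some of these structures can be paired with conventional Python constructs. The pairings below are illustrative readings only, not the formal definitions promised for the glossary:

```python
# Illustrative pairings of the listed structures with Python constructs.
# These are sketches of the mapping, not the theory's definitions.

count = 0                # assign / let / set: name a storage location
for _ in range(3):       # loop: repeat a process
    count += 1           # add: combine values

def nand(a: bool, b: bool) -> bool:
    """nand / incompatibility: all boolean logic can be built from it."""
    return not (a and b)

if nand(True, True):     # decide / if-then / select between cases
    outcome = "incompatible"
else:
    outcome = "compatible"

print(count, outcome)
```

The mapping of nand to "incompatibility" follows the list above, where nand is named alongside its logical reading.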

*Draft state, more soon*

- Equations will be here provided for reference, but will be parsimonious and only include those equations actually used or discussed.
- Equations themselves are redefined. For a partial existing treatment of the author’s views on the meaning and justification of equality, in the context of moral and social application, one can read *Abandoning Equality*.

The mathematical equations progress from those that are simpler, used for the fundamental reconstructions of mathematics utilizing Wanattomian theory, to equations of greater sophistication from higher areas of mathematics. The progression is intended to provide a scaling proof, illustrating the effectiveness of the application not only for fundamentals but for areas of mathematics applied to the sciences.

*To Add*

*To Add*

*To Add*

The equations of physics build from the demonstrations using equations from mathematics. Here it is demonstrated that Wanattomian theory is applicable to questions of physics, and not only of mathematics, and can be used in active research in the specific domains of physics covered. The treatment cannot be exhaustive, but again the method is to provide a demonstration that scales and covers all areas of interest considered, building additional credibility for the claim that Wanattomian theory can overhaul mathematics and the sciences. This is a sampling to illustrate that it can be utilized nearly anywhere in physics.

*To Add*

*To Add*

The equations of chemistry inform additional applications of Wanattomian theory, and provide an elucidation of some of the primary concepts of Wanattomian numbering.

*To Add*

The present work in progress on mathematics involves wanattams, or units of the base_{1} number system, which the author is developing as a replacement candidate for our current base_{2} through base_{n} numbering systems, particularly the base_{10} number system, commonly called our decimal system.

The conversion is not merely of semantic or translational interest, and is not trivial.

New writings will be added here in the near future. Handwritten materials have been created for this effort, and several scans are provided below, on the purpose and intent of this system of wanattams.

A “wanattam” is an alternative conception of an initial unit of numbering which is related to the use of the number one.

*Sunday, April 10 ^{th}, 2022*,

After reflecting, not long ago, on what the least arbitrary numbering system might be, and considering certain options like binary, I came upon the idea, almost as an afterthought, that simple hashing and counting is the simplest of all, and apparently not arbitrary. Furthermore, it appears all numbers can be represented using such a system, and it appears that people forget that their existing base10 numbering system is a convenient way of abbreviating ones. Developing this idea over the last year or so in my mind, and with some scribblings, I have made some progress worth recording.

I will cover observations on each of the following:

- A base1 system appears to indicate there is no need for decimals or a zero, as has been assumed by the existing systems we have been taught and use.
- A base1 system appears to offer some clarity on when numbering should occur and when it shouldn’t.
- A base1 system appears to provide clarity on what truth is and is not, in the related field of logic.
- A base1 system can be used for an alternative computer systems architecture. Instead of binary, unary can be used. Base2 computer systems can potentially be reinterpreted as base1 systems too (i.e. the zero in a memory word of a computer system is not necessarily intended to be a numerical zero; it is a representative zero, and being representative, it can be considered a gap or omission in unary).

Considering the first point, let’s think about how base one could be used to remove zeros and decimals in the simple system of money. We are very familiar with using two decimal places to indicate cents in the United States, and the non-decimal digits to mean dollars. It would appear we would have some difficulty if we did not have a decimal or a zero. However, in a unary system of money, one only needs to recognize what the smallest required unit is, and utilize that unit. So if a cashier tells you that you owe $13.48, you already recognize that this amounts to 1,348 pennies. This means a unary system could be adopted with only a single money value at the root.

The use of 1,348 pennies still commits to decimals, however. It is also understood, although I’d say we’ve forgotten it, that 1,348 pennies is merely a count of single pennies. Most are familiar with hash marks. A hash mark is like a plain one, without embellishments; a vertical bar. For each and every penny of the 1,348, you could have first counted with hashing and not with decimals. This means you would have written 1,348 ones, and each one corresponds to one penny. In this numbering system, there is the annoyance of having to write so many ones. And really, you do need to write each and every one, if you don’t have an alternative one-recording system. Well, base10 is such a system. But base10 leads to confusions, because it provides the illusion that certain digits have special properties; i.e. some think 7 has a special property, or that there is sevenness. There are certainly pieces of knowledge relating to prime numbers that are of interest, but that is not the confusion I’m trying to indicate and focus attention on. The main point is that there is no 7 as a distinct, special number, apart from its being a string of ones. Taken as a string of ones, its interest is diminished; its relationship to superstition diminished. But more importantly, it is more clearly recognized to be only a symbolic representation of a string of ones. Another way to make this clearer: now that there are only ones in this unary system I’ve identified, we could, just as in the Chinese language, have a character for each and every number that exists. This also diminishes the supposed value of the symbols from one to 10, because all numbers could have unique symbols, or no unique symbol at all. All could just use 1.
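The penny example can be sketched in a few lines of Python. The function names are illustrative only:

```python
# Sketch of the penny example: a dollar amount is a count of the
# smallest unit (pennies), and that count is a string of ones in a
# unary, base-1 representation.

def to_pennies(dollars: int, cents: int) -> int:
    """Collapse the decimal dollars-and-cents notation into a plain count."""
    return dollars * 100 + cents

def to_unary(count: int) -> str:
    """Write a count as a string of ones, one hash mark per penny."""
    return "1" * count

pennies = to_pennies(13, 48)   # $13.48 -> 1348 pennies
marks = to_unary(pennies)      # 1348 consecutive '1' characters
print(pennies, len(marks))
```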

Back to the issue of having to express large numbers like 1,348 with so many ones. This is an issue I am currently working on in my creation of a system of “oneing” that only uses ones, but does so with much larger chunks of numbers than 10, and represents them according to the requirements of perception. Here too there would be an expected elucidation, because now the symbols that represent numbers are more clearly, and less confusedly, disentangled from the method of recording them. Now it is clearer that the rules of recording numbers relate to limitations on perception, on writing, on printing with a computer, and on displaying variably under a large range of conditions. The recording is shown to be more arbitrary, and other pathways of numbering are offered up for consideration. Right now, there is a forgetfulness around numbers, which are treated as non-arbitrary rather than arbitrary. Arabic numbering is an arbitrary system, and highly arbitrary, since apparently the number of digits is related to our hands, which is not at all a good choice if one wants to be maximally unarbitrary. Notice a system of ones is the smallest possible system. When you count on your fingers, you still count ones. Other species of animals would probably not choose 10, but they would have no choice but to choose one. In this sense it is not even a speciesist system to use unary, but it is a speciesist system to use base10; or at least, base10 would be more inviting to other species having 10 fingers or toes like we do, but looking across the animal kingdom, it appears very many animals do not have five fingers and two hands. This is a consideration that is not necessary. However, the numbering system has other relationships to non-animal considerations, like creating computers, and optimal ways of creating computers have nothing to do with fingers. Better still, mathematics is something that strives for permanence and timelessness.
If permanent, many billions of years must now be accounted for in ones, and the system must include any animal that might use it, and that might be offended if not numbered among the mathematical. Also, it is to be expected that animals would be victims of systems employing numbers, and it is useful not to forget that they could be beneficiaries of the math, and not only humans, in ways the reader may not recognize immediately. We should not assume that we will be the only species ever to use math, particularly because in the near future Homo sapiens may not be the species using the math we are now creating. They may not even have 10 fingers and 10 toes, because likely that will be something we can choose and design for, and later it is highly likely other options will be considered and chosen. In any case, it appears ones are required and are non-biased. It appears that base1 is the least arbitrary number system, apart from our special needs of reading, recording, and perceiving differences. My new numbering system will address these issues as the Arabic numbering system already has, but without forgetting that that is a separate consideration from the numbering of things in ones.

*[Stopping point, 1:33 pm. Total writing time 19 minutes with no
edits]*

*Saturday, July 30 ^{th}, 2022*,

In the prior section, a simple example case showed that replacing a
base_{n} system with a base_{1} system of wanattams:

- is feasible
- removes the need for the decimal place
- removes the need for zeros

It was also discussed why a system of base_{1} is a non-speciesist, more future-resistant, non-arbitrary system of numbering: while alternative systems have in their history made commitments relating to cognitive limitations, or to numbers of fingers and toes, this system has no such commitments. It was discussed that this system is a necessary system, because whatever system is employed, counting with ones is at a minimum required.

It may additionally be remarked that any non-base_{1} system is already considered a translation equivalent of any other system with a non-one base. A base10 system, being a translation of a base1 system, can be rewritten as a base one system, which implies that all mathematics relying on base10 is simply another way of writing base1. The reason for not using base1, it was discussed, relates to the need to avoid writing too many digits when representing large numbers.

The present section has the following additional developments and interests:

- Wanattams, or base_{1}, can clarify when names can be applied and when they cannot.
- Rules for the application or naming of things with wanattams determine, in conjunction with the practical/pragmatic needs for using math, what operations may be employed. In the system of wanattaming, it is not assumed, for example, that division may be employed in every scenario formerly considered standard for division.
- Avoidance of fractions is discussed, in the context of the earlier avoidance of the decimal (a period with decimal digits), and in the additional discussion of the relationship of division to naming.

Let’s develop these points with a simple case of deciding how one will divide a pizza. This is one very simple example, intended to be illustrative of a much wider and more abstract need, which will be used as a starting point for subsequent examples of increasing complexity. The division of a pizza is uniquely interesting in that it is very straightforward and well understood in our experience, but it can be used to show that there are very serious errors in our usage of mathematics, and in our assumptions about what math is and how it can be used.

Suppose you order an extra large pizza at a restaurant. You have 7 total people in your party, including yourself. You may order more food, but initially you will have this first pizza, which you will need to cut into pieces to serve each person a portion.

When the pizza is served at the table, it is uncut. The waiter/server stands over the table ready to do long-division on this extra-large pizza, using long cuts from one end to the other, through the center. Before cutting, he asks you how you would like it to be cut, in order to satisfy yourself and each of your guests optimally.

You are a mathematician, and you can’t simply use what you’ve learned in school. You develop upon it in order to extend the math that our civilization can utilize. So you think critically about this situation and try to quickly find an optimal way to divide the pizza.

Observing the pizza closely, you notice it is not an exact pie. It is larger on one side than the other, and it isn’t totally circular. Additionally, rotating the pizza in your mind, you notice that a cross section of the pizza would reveal different densities among slices, and that some parts would be thicker than others. The other guests, wanting to assist you, are looking mostly towards one side of the pizza, which has been more favorably supplied with favorite ingredients, and more cheese. Normal application of mathematics will not be adequate in this case, you know, because it will simply call for a division of the pizza with cuts, the placement of which is ignored, resulting in an “eyeballed” separation of the pie into 8 pieces. This would be achieved with four cuts across the pie, resulting in 8 wedges. There is no clear center point in the pie, just a point where the cuts hopefully all cross each other, or nearly do.

It can be seen quickly, by an imaginative reader, that there are many problems in addition to the ones listed above. If one thinks about how one would make a pizza fair for children, one finds that there is no way to arrive at equality in the pizza long-division, and the kids will find many reasons to make you believe the cuts were unfair. The best you can do is decide for them, or make them feel satisfied that what they get is fair in other ways, or fair enough. Additional issues will be added shortly, but for now let’s consider that our normal ideas about how we would apply math are not really that critical, and do seem to have problems.

Consider that we have chosen a cutting technique that assumes we want an even number of slices. This assumes not only that we will divide, but that we will divide the pie *evenly*. I will argue at a later time that the assumption that there really is an even and an odd is not particularly clear in a system of wanattaming. For now I will comment that an even division of the pie, on social grounds, would be one that does not leave any additional unused pizza remaining (a modulus or remainder, in regular math). If we were to have an additional slice, we would suddenly run into the same issue again, of allocating that slice among the seven people, who have not yet had enough food to satisfy them. So once again, division seems like it needs to be employed. But now consider that this slice is not a circle, and the former assumptions about cutting with four long divisions into 8 slices won’t work as effectively. One would also be disinclined to cut that remaining slice into 7 slices, not being used to such cuts, and also because the cuts would be so small as to not be socially acceptable. The result is that there is not a clear application of mathematics to the division of the pizza.

That additional slice also reveals that this system would rely on fractions or decimals of the pizza. Now, each person did not get one slice, but received one and one-seventh slices, approximately 1.1428…, an awkward number. This readiness to have a decimal, a zero, and a fraction is not really justified in a way that makes it better than another method of even division into 7 slices.
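The arithmetic behind the awkwardness can be sketched briefly: 8 slices among 7 people forces a remainder and a non-terminating fraction, while 7 among 7 does not. A minimal Python sketch:

```python
from fractions import Fraction

# 8 slices among 7 people: a remainder, a fraction, and a
# non-terminating decimal all appear.
slices, people = 8, 7
whole, leftover = divmod(slices, people)   # 1 slice each, 1 left over
share = Fraction(slices, people)           # 8/7 = one and one-seventh
print(whole, leftover, share, float(share))

# With 7 slices among 7 people, each person is simply assigned one:
# no remainder, no fraction, no decimal.
assert divmod(7, people) == (1, 0)
```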

A system of wanattams here would call for the division of the pizza into the minimal number of slices that make sense given the social rules and the physical work to be performed. It seems here that 7 slices is adequate, but 14 slices would work as well, if each person could get 2 slices from different parts of the pie, in a way that allocates the pizza’s resources more equally. Notice that simply cutting the pizza into 8 slices does not actually allocate resources in an equal way, yet the number 1 has been applied to each of the slices. The result is that the pieces are not really the results of a division operation that preserves an equality relation: no one piece is really equal to another, no socially approved way of equally allocating resources has been used, and instead there is a reliance on school division and an easy method of cutting. A system of wanattams, however, would want to divide the pizza into 7 slices, in a way that is a proper application of math according to the needs of the situation, with the result that any of the 7 slices is also truly equal to any other. However, in this case we will find that we do not have determinate needs or requirements for cutting and pizza allocation, so the result is that wanattams will still fail on the requirement that each piece be equal, or really have a wanattam value of one applied.

There are two serious issues here with this pizza long division that the mathematician using wanattams needs to resolve. Firstly, there is the issue that the math used does not employ physics. The second is that which has already been explained: the method of cutting must also satisfy each person eating the food on social grounds of fairness, with the result that each slice is equal on other grounds. It will be assumed, since Mattanaw is an expert in moral philosophy, that for now the second, social requirement will simply be that each person ostensibly approves of what they receive, and that a sufficient level of fairness for someone like Mattanaw is achieved (i.e. nobody is complaining, because the outcome seems close to what reasonable people would expect for the socially just allocation of food). Instead, you, like Mattanaw, intend to focus your attention on the physics of the matter, in order to gain an equality among the pieces.

Notice, however, that calling each piece a piece of the pie is not quite justified. When a piece is cut, and it is a wanattam, it will have a name, and that name will be one. Once the one has been applied, it will be satisfactory to refer to each slice as a singular piece. It is possible to exit this way of thinking and call any subdivision of the pie a piece, but once the division has occurred, it will be necessary that each piece really is a one. Then all who speak about their pieces of the pizza will be referring to the same thing, *namely*, and *numerically*: even one-sevenths of the pie, divided along physical bases, corresponding somewhat to social expectations about the allocation of resources.

So you look at this pie, and you think to yourself: how can I optimally cut this thing so it has about equal amounts of cheese, crust, toppings, etc., with similar visually pleasing properties, which can be defined in terms of colors and arrangement? You also want to weigh the pieces, in order to estimate whether or not a particular cut really did arrive at what you anticipated were your requirements, meaning you will have to test the pie after it is divided to see if the division was right.

To keep the conversation short, let’s simply say you arrive at what you think is equal, in terms of wanattam assignment and division into seven equal portions, using physical properties, and you are able to communicate the need to divide the pizza along specific line segments. Let’s say also that you look to your group, and they, showing signs of distrust in your judgement, hear you out on your plan, and vote unanimously to support your decision about how to cut. The server/waiter then cuts the pizza skillfully, despite having an inclination to cut with 4 strokes: 7 separate strokes cutting into the center of the pie, which was also roughly determined, in a manner similar to a center of gravity.

Now everyone eats the pizza, and is very happy that none is left over to look at, and to think about how it would be divided once everyone finishes their pieces. There was no use of a fraction. There was no use of a decimal. Each piece, though definitely unequal, was physically examined to create rough conditions for equality, good enough to claim a reasonable implementation of math using the wanattaming approach. The slices were evenly distributed to each guest, even though the number in base_{10} would have been an odd prime number. In your mind, as you were going through the allocation, you were thinking in ones. You did not really think there was any seven involved. You thought you were naming sections of the pizza as pieces, and that each piece would also be named one, and that there would be

1111111 slices.

Notice also that the base_{10} seven was clearly translatable to ones without any issue or loss of function. Notice that this different application of math, without assumptions, arrived at a clearer result than any use of pizza long-division that you or anyone you know has ever used.

We have seen each of the earlier ideas about wanattams shown to be functional. There was no need for a decimal, or decimal places, and no need for fractions. There was no need for a unit smaller than 1.

We have also seen that each of the other topics of interest mentioned was covered. Firstly, social justifications were required in order to determine which math was applicable, even for something as simple as division. Arguably, division was not performed, but rather a naming of the pizza. What exactly occurred was a matter of process, and not a matter of the strict application of division. We’ve seen that the use of wanattams provides clarity about naming, since before, we would have had individual slices that arguably should not share the same name, being so different. Instead, having really closer equality results in more acceptable naming, and more acceptable use of numbers, since if 1 does not equal 1, then it is clear the math has not been sufficiently well applied. In passing we saw that physics was required for this approach, connecting naming, the application of ones, and physics to mathematics. We have also seen that this system clearly translates between base_{10} and base_{1}, except that using 1111111 to denote the number of slices is clearer, because it requires less interpretation than 7, and no assumption that 1111111 cannot be even.

It will be found that this example provides many points that we can develop later, and much will apply to all of mathematics and physics, and to linguistics/language, but also to our ideas about what is fair and what isn’t.

Since this approach seems more fair and clean in its division of the pizza into pieces that are satisfactory, arguably it is more fair than any division of pizza to date has been. This would imply that the allocation of pizza, unless there was luck involved (i.e. 4 people and 8 very factory-like slices), was never really that equitable, and required one or another participant to feel unsatisfied.

Both systems would ultimately fail to provide a result that is considered totally socially acceptable or equitable, however, because, like the children, all would differ in how they analyze the subject according to their varying tastes and bodies. I did not consider body composition, or sex, or comfort levels about portions.

Even if wanattaming in pizza long-division cannot satisfy everyone such that social justice is achieved, it does actually resolve a number of issues in the application of mathematics. And in any case, it extends our coverage as to the feasibility of a system of wanattaming, or numbering with base_{1} instead of base_{10}.

*Written without edits in 57 minutes. Finished Saturday, July
30 ^{th}, 2022*,

*Started Sunday, August 21 ^{st}, 2022*,

Now that some basics have been covered on what wanattams are, and their feasibility for fulfilling various mathematical purposes, we will move forward here to substitute existing mathematical methods with methods using wanattams, to the same effect.

Recall from earlier that a base_{10} system of numbering, the Arabic decimal system that rounds at 10, as we now use it, is translatable to a range of other base systems. In computing we use octal, base64, binary (which is base2), and a number of other systems. Colors are represented in hexadecimal, which is base16, to express the 256 values of each channel. In base16 we have 0-9, then a-f, so f represents fifteen, the last value before rounding to the next digit. There are many systems that are similar, and arguably an infinite number, if rounding can always occur one number past that which was chosen previously.

What is important here is that this really does mean that all of math can be done representing everything in base1. Base10 is our way of conveniently writing ones. However, the system of wanattams goes further, claiming that other parts of our system may be eliminated, like decimals, fractions, and zeros. Above it was already shown how we can avoid decimals, fractions, and zeros for many regular things we do with math. Arguably, most business interactions do not require a decimal, a fraction, or a zero. Additionally, it is argued that negative numbers are not required.

These are additional statements which require demonstration, however. Base1 numbers can be used in math everywhere base10 was used. The intuitive proof of this is the translatability of baseN numbers, the same property already used to justify using base10. To refuse to recognize that base1 can be used, and to require proof to justify its use, would instead require that base10 itself be proven, given that all numbering requires ones, but does not require rounding.

In order to make the system of wanattams acceptable to the reader and the mathematician, it will have to encroach on mathematics, redefining or replacing it wherever it can fulfill the same objectives, which seem proven in practice. In some places, it is anticipated, what was proven in practice was really only traditional, and can be corrected with wanattams. In other words, this procedure will reveal things that are incorrect about math, and merely assumed to be true through long periods of use. Nevertheless, it is understood that with the removal of accepted elements of math, it must be shown that work can be conducted with similar or better results.

Recently I was thinking about the use of exponents as a good area for re-examination. An exponent is a shorthand way of expressing a multiplication. A multiplication is a shorthand expression of an addition. Exponents have, we are told, an inverse operation: rooting. For the squaring of a number, we have an exponent of two. It is written, for example, as:

x^{2}

where x is taken to be, here to simplify, a rational number.

Oddly, we use a history of squares in multiplying various numbers, and this is something I will henceforth avoid, because it seems to have many historical defects. Firstly, if one goes up in exponents, suddenly there are no visuals to utilize. This makes squaring a bit odd, in that it occurs only early in the series, and the infinity of other exponents have no related visual word. We also have cubing, but after 3 we don’t know what to say any longer, and arguably shouldn’t say “square” or “cube” at all. There is no geometrical component in what we are doing at present. What we are doing is multiplying two numbers together, one time; i.e. we have two of the number, multiplied.

This sort of thing might seem uncomfortable to the reader. But have you noticed that past cubing, past x^{3}, it starts to feel odd and unfamiliar? This way of thinking blocks you from progressing, and you get stuck wondering what to call an x^{5}. Perhaps some kids decide it is not important to name these other values, and that they can instead carry the progression, unnamed, all the way to any number which can be included in the exponent.

x^{2} is x*x, or xx.

Filled with numbers, using a five as an example, we get 5^2 or 5*5 or 5(5), which is 25.

As addition, this is equivalent to five fives, or 5+5+5+5+5 which is 25.
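The chain from exponent to multiplication to addition can be sketched as a pair of small functions (illustrative names, not part of the theory):

```python
# Sketch: an exponent unwinds to repeated multiplication, and a
# multiplication unwinds to repeated addition, as described above.

def times(a: int, b: int) -> int:
    """Multiplication as repeated addition: add a, b times over."""
    total = 0
    for _ in range(b):
        total += a
    return total

def power(a: int, e: int) -> int:
    """Exponentiation as repeated multiplication."""
    result = 1
    for _ in range(e):
        result = times(result, a)
    return result

assert times(5, 5) == 5 + 5 + 5 + 5 + 5 == 25
assert power(5, 2) == 5 * 5 == 25
```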

Going the other direction we have square roots, for exponents of two. Then we have cube roots, for exponents of three. Then again, we pretend that anything unnamed is unimportant, but this rooting goes on as long as we like, too.

For the present, however, let’s say we want to find which x in x^{2} would result in 25. We don’t know it’s five yet. So we take the square root, or two-root, of 25. What do you think the answer is?

Well, we say it’s five, but it is not only five! It is also negative five, which is strange. If we think back to school, we’ll recall that for the square root of 25 we would have plus/minus 5 as our answer, which means there are two answers and not one, and one of them is negative.

The weirdness in this, I think, is a defect of mathematics. Notice that in wanattams there are no negatives at all. This would imply that if we had a rooting procedure, we would have a single answer for the two-root of 25: it’s just five. Going the other way, five squared is only 25. There is no squaring of any negative five, because negative five doesn’t exist.
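A positive-only rooting procedure of the kind suggested here can be sketched with Python’s integer square root, which happens to operate only over non-negative counts:

```python
import math

# Sketch: over counts (non-negative integers), the two-root of 25 has
# exactly one answer, and math.isqrt never produces a negative.
root = math.isqrt(25)
print(root)
assert root == 5

# math.isqrt rejects negatives outright, matching a system in which
# negative numbers simply do not exist.
try:
    math.isqrt(-25)
except ValueError:
    pass  # negatives are not in the domain at all
```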

Aren’t negatives a little strange? Wasn’t there always something a little weird about negatives?

Consider if someone says you have negative 100 dollars. If you check for anything related to money, you will find you don’t have any. It is not that there is ghost money, to be filled in with dollars, which would then result in no money. You have no money now! In what sense is your no-money negative anything in particular? You simply have no money, and it was always weird to think that you can *have* anything negative.

Then someone shows up and says: well, you are in debt. This means we think, “Hey, you have no money in your purse or wallet, and if we say you take 100 more from that, it’s negative 100.” My claim is that this no longer represents reality in the way we pretend it does. In my opinion, what is happening is that you have no money, but if you count out what is thought to be owed, the creditor wants 100 dollars from you to be satisfied. They track this on their side. If they forgive it, you didn’t magically get 100 back and arrive at zero. Your condition regarding cash is identical.

You can use only positive numbers, or numbers, to express this completely. You can never have negative dollars, in this system.

Notice that wanattams fix this odd situation that would confuse children. You have only positive values to work with. When you add five fives, you get 25. When you root it, you don’t end up with a negative answer; you simply went the other way, back to five.

Debts are positive values and not negative values.

Going further, though, into wanattam land, we express fives not with the decimal digit 5. That would be confusing too. It leads people to numerology and superstition concerning fives. A five does not have a life of its own. It is just a choice to have 10 digits, because we have ten fingers and toes. What we don’t have a choice about is counting.

So when we start with five we have:

11111 (“one-one-one-one-one”).

Later we will summarize this easily again, without forgetting they are all ones. So, as absurd as it seems now in some ways, we will really get the above benefits, and still be able to succinctly say and express all of these numbers, even when they get huge.

Let’s humorously do squaring of this number. But instead of exponent notation, let’s remove all exponents too. We’re going to say there are:

11111 (“one-one-one-one-one”), 11111s (“one-one-one-one-ones”),

written as:

11111 11111 11111 11111 11111

I included spaces to make it easier to read. It is still easy to read, like dollars, right? When numbers get huge we need a better way, though. I also do not prefer to divide into fives, but into numbers like 8. It will be shown later how this can be done, and I admittedly have some work to do to get that into a place that is more acceptable to my expectations. Nevertheless, it is workable. What is funny is that that is all base10 ever was! It was only workable.
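As a tiny aside in code (my own sketch, not part of the system described here), “summarizing” a count in eights instead of tens is just writing it in base 8, which Python’s built-in `oct()` already does:

```python
# Sketch: summarizing a count of ones in eights rather than tens.
# oct() writes an integer in base 8; 25 ones become three eights and one.
count = 25
print(oct(count))  # '0o31' -> 3 eights + 1
```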

So objections to my idea of using eights can be easily turned against the reader. Do you not know you did this the whole time? What are the errors of your doing this?

I will show errors as we go, and most will be unknown to the reader, who would have benefitted as a child from knowing about these problems when trying to understand math.

Consider again, that negative numbers, don’t seem to make sense, even
for the *easiest* things we do with them.

Physicists will claim that mathematics represents reality, but when talking about negative money, it is not clear what that reality is. If mathematics is going to represent reality, it should do it for elementary-aged kids, and not only for physicists, whom we are mostly just trusting.

Let us now take the root of

11111 11111 11111 11111 11111

which is saying, if we were going to add a number that same number of times, and get this result, what would it be?

Already this is confusing, and we say math is easy. But what that means is:

11111

added that same number of times

is

11111 11111 11111 11111 11111

which means the root of that last number is the first, or

11111

Let’s do the “cube” of three, which is really the third power of three, or three times three times three.

111

is used to

111 * 111 first,

then multiply that result again by 111, which means listing that result three times.

111 111 111 is the first result (nine)

111 111 111 111 111 111 111 111 111

did it three times, for 27.

111 * 111 * 111 is

111 111 111 111 111 111 111 111 111

Now let’s root it cube-like. That means we want a number that, if we listed it that many times, then took that result and listed it that many times again, we’d have the number above. Well, we already did that, so the answer is 111.

This is easy to work out the way we are doing it, notice.

111 * 111 simply lists the first number that many times:

111 111 111

Then we do that again, listing that result three times:

111 111 111 111 111 111 111 111 111
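The tally multiplications above can be sketched in code. This is a minimal illustration of base-1 arithmetic under my own hypothetical names (`tally`, `times`); nothing here is notation from the article itself:

```python
# Minimal base-1 ("tally") arithmetic: a number is a string of ones,
# and multiplying lists the first tally once per one in the second.

def tally(n: int) -> str:
    """Write n as a run of ones, e.g. 3 -> '111'."""
    return "1" * n

def times(a: str, b: str) -> str:
    """a * b: repeat a's ones once for every one in b."""
    return "1" * (len(a) * len(b))

three = tally(3)                  # '111'
nine = times(three, three)        # nine ones
twenty_seven = times(nine, three) # twenty-seven ones
print(len(twenty_seven))  # 27, matching the cube worked out above
```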

Now let’s do a fourth power and fourth root of 11, in a series (showing our work).

11,

11 11,

11 11 11 11,

11 11 11 11 11 11 11 11

In this we can see the fourth root and the fourth power. We can translate this to 2 and 16.

Wouldn’t it be strange if we took the fourth root of that last number and said negative 2 was an answer?

One issue I was thinking of is that we handle answers to equations
with exponents using graphs. Going back to that x^{2}, graphing
answers out, we have what we’ve seen in school as y = x^{2},
which is a parabola. It is like a cup shape showing answers on the
negative and positive sides of the y axis. This is accepted in school,
but what I have said earlier is that there are no negatives. This
means that in my system of wanattams, I have to explain this to people
expecting that cup shape. Instead of that cup shape, I only want the
right side of the graph, with half the cup, going upwards. This still
makes more sense because, what are these negative dollars, and negative
roots? Notice, oddly, that the negatives here are only for x values and
not for y values. In my system, there isn’t a negative x or a negative
y. Oddly, what this means is there are no quadrants at all on our
graphs that we recall being taught. Recall there are four quadrants.
They are numbered one through four, counterclockwise, beginning with
the first, which is positive-positive for x and y values. Everything is
positive in that quadrant. In the next, quadrant two, we have negative
x and positive y (up and to the left). In quadrant three it is all
negative, negative x and y, going down and to the left. Left is wrong
and negative, and so is down? That seems a little strange to me too,
indicating it isn’t correct. Quadrant four is positive x and negative
y, going right and down.

Imagine though, if we said there was only one quadrant, and it’s the first one. When we draw all four quadrants, we use a box. We draw in the middle a line going up and down vertically, and another line going right to left horizontally. This is like a cross that creates the quadrants. But when we draw the cross, and all quadrants, we still do it all in a box. Let’s just say, we grow quadrant one, to be the entire box, and now there is no cross in the middle. For now, let’s think it is like a normal bar graph like drawing. We have the up and down y axis going up and down, and another line at the bottom, going left to right, which is the x axis. But now everything is like that one quadrant.

Isn’t it kinda strange, that in all our business graphs and charts, we are kinda already doing that? We put everything up and to the right of these two lines, which are on the left side and bottom of the graph. Let’s draw this quickly:

(y)
 |
 |
 |
 |
 |
 |
 |
 |
 |________________________________ (x)

It looks like this. This is like one quarter of the coordinate system, but we use just this upper portion all the time, sort of like what I’m saying we can do with wanattams.

The older system looks like this:

  Q2-+       (y)       Q1++
              |
              |
              |
 _____________|_____________ (x)
              |
              |
              |
  Q3--        |        Q4+-

Doesn’t the upper graph make it appear we are only using quadrant 1 for everything in business?

Similarly, I think the graph of y = x^{2} should only show
answers that are in quadrant 1, and that approach would really show the
same results as what we were just doing with our ones.

What has to be determined is whether there are any serious mathematical ramifications: working with exponents under the rules of exponents we’ve adopted, would anything break and not be replaceable? I don’t care if things break, if what is gained is:

- Clarity,
- A way that seems much more teachable,
- More clearly modeling reality,
- Things that break but can also be replaced, OR
- things that break but should be abandoned anyway.

For what we were just doing above, I think every one of these benefits has been obtained. We don’t have negative money. Check: we are modeling reality better. Check: it is more teachable. Check: it breaks negative square roots, but so far we are not using those, and we also replaced them. Check: it appears we have broken the use of the graph where we were graphing what does not exist, so it is also breaking what should be abandoned.

However, digging deeper into the system of exponents, we may find uses which cannot be replaced or would be broken. This is something a mathematician might claim. This is not because we are using base1 numbers, but because of what else I’m doing to create wanattams, like removing negatives, and zero, and fractions and decimals. So far it is still working, however, and was working earlier when we removed decimals and fractional money. It appears we have not yet seen a counterargument to this practice.

I will remind the reader that we will also use wanattams to explain logic, and logical operations and connectives, and so benefits are already known to exist using wanattams outside of math, in the fields of logic and computer science. This relates to the article below on Gödel, which may be of interest to some readers. This is not merely play, although the work done here is playful. It will get more technical and more seriously connect with the foundations of mathematics and logic, and naming and words in language.

Next we will work on circles and Pi, and see what might not be especially useful, in modeling reality, about having a long stream of decimal digits that people strive to memorize seemingly uselessly, and how to do the same work more clearly and simply with wanattams.

Does Pi seem odd and confusing, or strange?

Notice that proponents of the love of the beauty of pi do not really have great reasons for it. Yes, it appears that working with Pi, we see that the next digits seem unpredictable, and random in a way. What can be said for the love of a number which is hard to know, because the next digits in the decimals that have not been memorized are not really predictable and must be computed, and appear random? “Wow, I mean… wow, right?” Well, consider if you memorized it to 1000 places. How random would each of the digits be to that one-thousandth place? They would seem known and not random, kinda, right? Also, the number has been denoted with a special symbol which is not a number. I do not find that acceptable in my system of wanattams. I’m not impressed with long decimals and the felt need to have a new symbol. It does not seem elegant at all, and rather seems inelegant. Pi written out seems sloppy.

“Pi is sloppy.”

What is Pi exactly?

First, let’s stop calling it a special name and refer to it as a computation resulting roughly in 3.14 at low precision.

Why is it a decimal and is it a decimal?

It appears it is not; it is simply a ratio that, when computed, results in a mixed number (mixed numbers are weird too; we will deal with that later) of 3 and a remainder. Three is somewhat interesting and elegant to me, being 111, very clearly. But the remainder that is not known is sloppy. There is a .14 and many other digits, we are told: this number is an irrational number. That also indicates something is strange. I also don’t agree with the naming of various number systems, which do not appear to have to be named as separate systems. It seems to imply that numbering is not agreed upon, and in that case, this paper should be welcomed.

So pi is really a computation result that is a mixed number with a remainder. That happens when we do division.

So the number is a result of a division?

Yes, and oddly we also say that that is a ratio, as if a ratio is an entity of its own and not two numbers being compared while avoiding computation. A ratio is like a preparation for doing some division without ever doing it. That is also what a fraction is.

“I’m refusing to divide.”

But dividing is also kinda:

“I’m going to do this math stuff when instead I’m done when I just write the two numbers in relation, kinda.”

1/3 is done in a way.

.3333 feels sloppy, because you decided:

“I must do this division thing.”

Pi really is, even with its fancy symbol, just that:

“I must divide now.”

But what is the ratio? Is the ratio also sloppy? Do we need either of them?

What could we do instead of this with wanattams, and what would be gained, and what might get broken? What would be unrecoverable if broken, and not better even if fixed?

Consider, before we move to the next topic, something of interest: Have you ever seen a circle in nature? And ask yourself: is your love of pi somehow connected with pie, and with circles which supposedly exist, from things you’ve done in school? There really are no circles at all in anything physical that you’ve noticed. Computations of elliptical curves, which are like ovals, of all that stuff in space, which is a lot, do seem to have some relationship to computations of circles, even if circles seem not to exist precisely themselves (maybe). So breaking pi may have ramifications for Isaac Newton and physicists, in how they do things, and while what they do doesn’t always appear on earth in our ecosystem, it is all over the universe, and so will need some explanation. That being said, I’m pretty certain all they do can be replaced with wanattams entirely. That will relate, later, to the fact that they are never really precise to a level of precision that cannot also be handled by wanattams, but more importantly, they rely on computing systems that will be seen to rely on wanattams, done better than how they are done now, but also done in a way translatable from what is done now.

…

Pi is defined as the ratio of Circumference to Diameter, a ratio which, like the decimal expression, cannot be written to complete precision. An approximate ratio is 22/7. When you feel you must divide this ratio, you arrive at ~3.1428, which is recognizably shorthand for the number π, which has not yet been fully expressed.

π appears to be the name for a rough result of a division of an increasingly precise comparison (ratio). It appears no person, and no computer, has been able to tell us, what π is. This is not elegant.
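To make the 22/7 discussion concrete, here is a small check (my own illustration, not the article’s) of how far that ratio sits from the double-precision value of π that a computer stores:

```python
from fractions import Fraction
import math

approx = Fraction(22, 7)             # the classical ratio, left undivided
print(float(approx))                 # 3.142857... once we "must divide"
print(abs(float(approx) - math.pi))  # about 0.00126: the ratio overshoots
```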

“I’m in a fog on π.”

Is not elegance.

“It’s so elegant I named a pattern I can’t recognize.”

(To be extremely sarcastic about it).

How might we replace this with wanattams?

Wanattams are not merely a commitment to a base1 numbering system; such a system alone could still use the same ratio and the same decimal result of dividing that ratio.

Here what is suggested is to use a minimum unit of measure in which some measurement must be taken, on a physical observation of something involving circles. Notice this approach is very similar to what was done earlier with the application of wanattams to the division of a pizza into seven even units. First we have to consider what we are doing. That something resembles a circle does not imply it is a circle, or that we can immediately model it as a circle and start applying this method to finding solutions. When we were dividing the pie, we noticed it was not actually a circle, but something with a roundness that we can relate to circles that we might draw.

If we were to calculate the area of the pizza we were discussing earlier, or the circumference, we would not use pi.

So when would we use pi exactly? Would we use it when we draw pictures of circles? Would we use it when we make pies, that seem to be very closely approximating what we could draw, using a compass?

Interestingly, if a pizza looked like it was very close to a circle we would draw, with a compass, we would think we could start to use circles and circle related geometry to divide it and calculate the area and circumference.

But notice we forgot a better model would be a cylinder for a pizza, because it has depth. Oops, we were thinking a model using a circle would really model the pizza?

It does not appear, that using circle related geometry is really useful for this application, and that pi might be a misnomer. Pi does not model pies well at all.

Notice that if you knew nothing at all about geometry, you could still make pizza. You can round a dough and apply toppings. Afterwards, you can imitate the cuts others have made, with tools which seem necessary but aren’t, like that rounded pizza cutter, and cut out 8 pieces easily, but not 7 even slices, as was shown earlier.

It doesn’t appear that use of circles and 22/7 and 3.14 has any really useful purpose here.

But what about another area of interest, in physics or other? Circles are hard to find in nature, but we can draw circles. Drawing circles is something we did get used to doing in other fields. Suddenly we want round things, which are clearly artifacts of our own creation, because they appear so unnatural being round. Yet by my other writing, I have stated that we are natural and what we create is still a natural creation, and so pipes in plumbing, and circles we draw, really exist. But we can see there is a division in that there is no expectation to see artifacts of circles in nature, unless those creations, are temporary artifacts we have constructed.

Notice that if we build things, our designs would include things like circles, which have no analog in nature. Since we are the only animals that design things, this implies that designs apparently were not used for the natural things we did not make. This would confine all designs to the natural things humans have created for themselves, but not to everything else, from which we originated and where we found ourselves originally.

Let us focus on human creations of circles. Human creations of circles in manufacture are getting increasingly accurate. I have stated, in the article on abandoning equality in the social world, that however accurate we get, they are still unequal to each other at high precision. In other words, at the highest level of precision of circle drawing, there are surface and line irregularities relating to uncontrollable configurations of atoms. Using microscopy one would look at the surfaces of two very precise circles and see that their edges have roughnesses that do not match. We cannot draw the same circle twice, and this implies we did not know the circle we would draw once.

“We did not know the circle we would draw once.”

So even at the highest level of precision of drawing circles, pi is not defined. Not only that, we did not confirmably draw a circle. Instead, what we have done is draw something whose ratio was as precise, or more precise, than what we would want, aesthetically, or, in manufacture, for function and quality, to call it a precise circle.

Notice also that there is no infinity involved because we get down to a final level of atomic considerations, where there are discrete pieces of things, joined together, with gaps, and no smooth continuity like one would expect in a circle. Oddly, what results really is not like a circle and so again at the greatest level of detail, it is not altogether circle like, and does not have any perfection in being a circle, or any infinite precision which can be approached. Rather a finite level of precision is reached and at that finite precision there are irregularities.

The implication is that pi is not needed at the most accurate level, and instead we can entirely use wanattams. Wanattams are intended to be those units of base1 which are at the lowest level of accuracy and precision for measuring and representing things. Being also associated with naming correctly, things which are nearly identical with one another, atoms, being interchangeable, can all be assigned ones. If it is ever instrumental enough to go beneath the atomic level for measuring things like the wannabe-circles created by us, then we can apply wanattams to those and have a yet smaller unit to rely on for precise measures. At the final level, if we were to reach it, which would join various parts of physics together, with a more basic unit usable for all sorts of measures, with no decimal places, at highest precision, we would have a wanattam that is also a mattanaw. Mattanaws are yet to be identified for this purpose, but would be maximally precise wanattams that would be unarbitrary in nature, binding physics to mathematics, completing the joining of the two disciplines into a single discipline.

Back to our manufacture of exacting circular things. If we manufactured something that was a very exacting circle down to the atomic level, where irregularities were revealed, we could measure the circle using a ratio more precise than 22/7, but still not so precise as the number we don’t know that we call pi. This means we don’t need pi at all. Rather, we simply need the ratio that, at the highest precision, is where we leave circleness, or where we’ve gotten as exact as we can estimate, for our irregular circle. We would have a margin of error in this circle. The irregularities would need to be accounted for to know the area. Arguably, there would be less interest in area as the precision to the atomic level became known, because at that level there are discontinuities in area, and it would not be totally clear what is an area and what is not.

What if instead of using 22/7, we simply measured the area using tools, which did the work with approximation, seeing the whole, and breaking it into discrete units of work on area, that include all the irregularities? In this way, we can compute areas of irregular circles of all kinds.
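A rough sketch of that tool in code (all names are mine; the shape is assumed to be given as an inside/outside test, and a plain unit disk stands in for a real measured outline): count the small grid squares whose centers fall inside the outline, and no pi appears anywhere.

```python
# Estimate an area by counting small grid squares, with no use of pi.
# The inside() test would come from a real measured outline, with all
# its irregularities; a unit disk is used here only as a stand-in.

def estimate_area(inside, lo, hi, step):
    count = 0
    x = lo
    while x < hi:
        y = lo
        while y < hi:
            if inside(x + step / 2, y + step / 2):  # test each cell's center
                count += 1
            y += step
        x += step
    return count * step * step

disk = lambda x, y: x * x + y * y <= 1.0
area = estimate_area(disk, -1.0, 1.0, 0.01)
print(area)  # close to 3.14159... for this stand-in shape
```

For an irregular outline, only the `inside()` test changes; the counting procedure stays identical.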

“And with that we see that 22/7 and pi are only useful when it approaches a circle, and not at any other time.”

Which means pi is mostly useless by comparison to a tool that doesn’t use pi at all, for a range of applications!

What is the very best method of measuring circles, versus creating them? Creating circles seems to be an area where you already know you want a circle. And so, skipping all irregular circles, you opt instead to create something that is quite circle-like, even though you still know that at high precision it will be irregular and inconsistent. For this, you will certainly rely on machines and computers. Using the computer, you will write a program that has stored in it some method of calculation that assumes pi, and does something somewhat unknown, that is still sitting atop computer hardware which is based on logic, which, we will see later, uses something like wanattams and something less like math (see the article below on my Review of the Incompleteness Theorem). Not only this, the computer itself does not have the great precision you might think. It can only store numbers of a certain size. As I stated, this means that your computer does not know pi. Since your computer does not know pi, what it will do is use a number that is a really inaccurate and far-from-infinite version of pi. The software may use a “long double,” a memory allocation or structure for a more precise floating-point representation of a number. It is used in computations which are expected to be more precise. Floating point arithmetic with digits stored in the longest storage method for the computer can be used, so long as the computations that result don’t take too long for the computer to do. So you want your computer to draw out a pipe of maximum circular precision, but you need the pipe to be cut by the machine next week, and not a million years from now. Pi expects forever, actually. It expects you will finish the pipe in the afterlife, where you might think you have that amount of time. But you don’t!
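The point about the computer’s stored pi can be checked directly. A sketch: `math.pi` is Python’s double-precision constant, which keeps roughly 15 to 17 significant digits and then simply stops.

```python
import math

# A 64-bit float keeps roughly 15-17 significant digits, so the stored
# "pi" is a rounded stand-in that ends where the storage ends.
print(repr(math.pi))               # 3.141592653589793
print(math.pi + 1e-17 == math.pi)  # True: digits beyond the storage are lost
```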

“We don’t have enough time for pi in the afterlife.”

The result, in a week, however, is something that is really quite precisely circular, according to our increasingly exacting expectations in drawing circles from youth, from when we could use a compass poorly, to when we could use it well. Using it better, we start to think, there must be a form which exists outside of reality which is the very greatest circle, that really has smooth edges, not built of atoms, discretely joined with chemical bonds. But then, thinking nature “perfect”, these same thinkers confusingly forget, that irregular circles would be their very greatest embodiment of circles, even drawn like a professional who is “best” might draw it.

The result we see here, then, is that even though the very best circles are ones we create, we don’t know pi, and we can’t exceed what is given by the substances in nature. Our very best circle will not be composed of imaginary things, but of the things we draw with. Now we are physicists, thinking about lines that are ink made of molecules, and pipes that are circles, comprised of iron or steel, or copper, or low-value alloys we are told are copper.

The result is there are no circles at all in nature, and there is no Pi, and calculating the ratio to infinity is inapplicable to nature.

Wanattams, and the approach of using wanattams, are uniquely useful for illuminating the way to this conclusion and for creating a realism, giving us a way to model things more accurately and without too much lofty imagination. Here let us complain of our early dislike and distrust of infinity.

“Infinities do not exist, and we never liked them.”

“Infinities were used excessively by romantics, people who were not being mathematical in their wants…”

Thinking like children again, who really did see problems in the world but later gave up on disproving the assumptions (understandably, living only a short time, and having minds knowing certain goals are out of reach), we can remember that we really don’t like these ideas that much and don’t see examples of them.

“This series goes on and on, forever.”

Speaking of infinity, here we will give up on:

“forever”

which also never felt comfortable, but appears to be the cause of our wanting to have an infinity. Notice infinity is a mathematical term which is taken to be interchangeable with this lay term, which means we have taken a strictly technical mathematical word and made it identical with one we can’t define well at all. Infinity is better than forever, and yet we give up on infinity too. So we will need to give up on forever.

This does not imply, however, that we know of beginnings and ends to things like the universe. If we give up on forever, and admit we have not experienced forever, it does not imply that there were beginnings and ends briefer in duration than much longer beginnings and ends that we don’t have access to. This means, regarding the universe, that we can’t really tell the difference between forever and something that has a beginning and an end. Does it stop or doesn’t it, in space and time? We don’t know. But that doesn’t answer anything for us, one way or another, regarding whether it can be the only example of forever we have or not. This implies we have no examples at present, but we also don’t know.

We do not have circles that reach the forever of pi. We have created a symbol for a number which does not exist. We cannot create circles that utilize pi, and we cannot find circles that nature created, or that nature created through us, using pi.

But we do have circles we created using math, using digits that can
be expressed in base1, at a level of precision instrumental to us (and
largely a minimal precision, relating to elements or subatomic
particles); *and* we use something less precise than this in
computer systems designed to compute, in a reasonable amount of time,
what our machines can really build, because those machines also are not
spitting out individual atoms doing 3d printing. Perhaps in the future
they would, but even if they did, that would be near the maximum
precision, and it would not be totally exact. This implies wanattams are
sufficient for doing anything related to circles at the highest level of
precision, and we would have

“No fantasies about infinities or exacting curves, but instead would be closer to physics, using mattanaws.”

Mattanaws are reality-based units of math. Having a maximum precision
for drawing circles is useful, and would illuminate teachings to
children, providing them information about the coarseness of substances
in reality, and the irregularities which exist even doing the best we
can do, creating things like circles. We would not have fantasies about
math having extra “perfection” or being outside reality, providing
reality its design. Instead, we can look at nature and be informed about
what it has, and have a mathematics that is provably related to what we
see. Wanattams provide a system of numbering that is unarbitrary and
natural. It is not possible to have numbering without ones. Being
translatable from decimals, all math can be represented in base one, and
decimal notation is already supposed to be a shorthand way of
*writing* ones; additionally, wanattams are intended to remove other
arbitrarinesses, being something required for a truly natural
application and representation. Wanattams then discard things we have
invented which may have harmful side effects, which seem to be
instrumental but are unnecessary. Decimals are removed, and so are
infinities, and so are negative numbers, and so are fractions. The
intention is to discard all that we’ve created hastily and accepted too
quickly without reconsideration. By such a removal it is hoped, and
expected, that we will bind physics to mathematics, and make mathematics
clearer and more natural, with fewer aspects that would cause issues
with children, who detect errors but are then expected to never know
what those errors were, or to know that they might be errors which have
not yet been fixed.

Suppose now you were wanting to draw a circle at high precision. Did you ever need math for that?

It appears that to draw the best circles you can, you never would need math, and not only that, it would not help you, unless you were programming a machine to do it for you. If you draw the circle yourself, you will use a string with a point, and simply guide a drawing implement around it pulling on the string, or you will use something equivalent to that, like a compass, which is something hard that has a fixed point, in which a drawing implement can draw around it.

The very best circle you would ever draw would be about controlling your hands, using a tool, making a line around a point in the middle.
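Interestingly, computers can echo this point-and-string method. The classic midpoint (Bresenham) circle algorithm, sketched below in my own wording, rasterizes a circle from a single radius using only integer additions, with no pi and no square roots:

```python
# Midpoint (Bresenham) circle: grid points at distance ~r from the
# center, computed with integer arithmetic only -- one length, no pi.

def circle_points(r: int):
    pts = set()
    x, y, err = r, 0, 1 - r
    while x >= y:
        # each (x, y) found yields 8 symmetric points
        for sx, sy in ((x, y), (y, x)):
            pts.update({(sx, sy), (-sx, sy), (sx, -sy), (-sx, -sy)})
        y += 1
        if err < 0:
            err += 2 * y + 1
        else:
            x -= 1
            err += 2 * (y - x) + 1
    return pts

pts = circle_points(10)
print((10, 0) in pts and (0, 10) in pts)  # True: the axis points are hit
```

All the “circleness” comes from a single stored length, the radius, much as the string method uses only the string’s length.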

What about computing the area of a circle?

This assumes you have drawn a circle, or have one that is supposedly an exemplar. Here, admittedly, you would probably resort to math, unless you covered it with some quantity of material and measured the difference in weight of the material used versus the material discarded, which would give you a ratio that you could use like pi. You could even worship this ratio, as being closer to atomic weight and elements. Done well, you could use it forever from a single doing.

But more likely, you would measure the diameter and halve it to get the radius, or measure the radius directly, somewhat more poorly than the diameter. Notice you’d measure by hand and would “eyeball” to a level of precision, which is probably to the 1/16th inch, oddly, if you are in the United States. If elsewhere, you’d use a millimeter. These are not very exacting.

Not being a physicist, you would not know which accuracy of pi to use
corresponding to the level of detail in your measuring stick. But
knowing you already are not exact, you’d probably use pi as 3.14, which
is close enough, and you would compute 3.14r^{2}, which is, as
we stated before, not using pi, and therefore not using an ultimate
circle. It is using a rough circle.

Using this method, you arrive at an answer that is minimally using pi. You almost could just use 3 instead of pi, which is pi still, at even less precision.

“Have you considered that 3 is pi?”

Stating that 3.14 is pi, compared to a high level of precision with thousands of decimal places, is even worse than claiming 3 is pi, compared to 3.14. Therefore, for your purposes you can really think of 3 as rough pi.

This makes the infinite accuracy of pi seem even more obscure and remote, and sloppy in a way. Three is very elegant, and perhaps makes usable circles for nearly any application in which you could require circles.
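The “rough pi” claim can be put in numbers (my own quick check, with a hypothetical radius): using 3 in place of 3.14159… costs under five percent.

```python
import math

r = 5.0  # a hypothetical radius, measured only roughly anyway
for rough_pi in (3, 3.14, math.pi):
    print(rough_pi, rough_pi * r * r)  # the areas differ by a few percent at most

print(1 - 3 / math.pi)  # ~0.045: about a 4.5% shortfall from using 3
```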

In a system of wanattams in which the precision you are accustomed to is the standard, we can say:

111 is pi.

But if we wanted to be more precise, not that we really need to, we could use:

111 111 111 111 111 111 111 1

in comparison with

111 111 1

which is similar to the ratio 22/7, but without a ratio, and without an odd urge to divide.

Notice that using three in order to compute our earlier circle, with
3r^{2} in place of pi r^{2}, we did not divide anything. We used the
result of a division, though, but discarded the extra digits and
rounded down, which is always done with pi.

Consider how you would ever draw a circle, or create a circle, and whether you would need pi. I can think of a few normal ways you would construct circular objects, not using pi, or only kinda using pi.

- You bore a hole using a straight object, like a stick, like a bow drill, and the result appears circular.
- You, again, draw a circle, using a compass, or a circle with a centerpoint held.
- You print a circle, using a program which has a stored pi, or a number you input, in memory or by reference, as a reasonable value of pi, probably not 3 but near 3.

I will for now set aside the third option, as being a case where a pseudo-pi (which is every pi) is used unreflectively. We will see if we can do this instead with wanattams, using an alternative approach.

For the first two, we did not use any ratio or number. If we needed a circle which was a certain size, we may have measured the string, or the distance in the points of the compass. But we did not have PI. All we had was a single length. After having that single length, we mechanically rotated a drawing implement about the point, to have a circular line.

Doing it this way, it is not clear which precision of pi we would have achieved. Perhaps, doing it very well, as the architects and engineers of the pyramids, and of things within them, supposedly did, and using a very careful manual technique, something really finely drawn would result, without much wavering of the hand or implement. Maybe it took a very long time. In that way a mechanical approach could result in something more precise than what we would get with 3.

If you used a 3 to draw a circle, you would be 95.54 percent accurate relative to 3.14, and probably you would have gotten an A for your drawing, if done in a school setting. With a margin of error of 4.46%, your lack of exactness probably would not be detected visually.
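The accuracy figures above can be checked directly; this sketch assumes relative error against 3.14 as the measure of accuracy:

```python
# Accuracy of drawing with 3 in place of the schoolbook 3.14.
accuracy = 3 / 3.14            # fraction of 3.14 that 3 covers
margin_of_error = 1 - accuracy
print(round(accuracy * 100, 2))         # 95.54 (percent)
print(round(margin_of_error * 100, 2))  # 4.46 (percent)
```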

Creation of exacting circles seems to have been done best by the Egyptians a few thousand years ago, and less exactingly by a person living today with a compass and string, and pseudo-pi at 3 to 3.14. This gives us a rough range of precision if we never use pi for creating circles.

From the most precise circles hand drawn or drawn with implement, with no knowledge of math, achieving great precision nevertheless, we have a result which is better than all of nature, but is a part of what nature produces, through us, without math. We now have a natural circle in the world, which is well-done, and aesthetically pleasing, like Luxor, in Las Vegas, Nevada. We can then measure this circle. We can directly run a string around the outside and see what the measurement is, and we can run a string along the center, to two points of the circle, to find the distance across, making a diameter. We can then of course, compare the two, if that were at all interesting to us. Believing circles to resemble each other, no matter the size, from small to large, we take this really exacting circle, to be a sort of guide for all circles, and create a ratio of Circumference to Diameter. This means that for the math, we have first relied on non-math, and simply measured.

Wanattams can be plainly used to create extremely precise circles, going by radii alone, measuring only radii and ignoring circumferences. Want to draw a circle? Know its size by the radius you would use. Determine the minimum level of measure you will use, or the maximum precision. Perhaps you will use a ruler with wanattams printed, as tenths of millimeters, or decimeters, and you will use magnification to support your limited eyesight. (Some would hold your eyesight is not limited.)

Notice that with our formula with pseudo-pi in it, we use only the radius for area, and for circumference we also use only the radius, but express it as 2r, the diameter. Only the radius is needed for an index of circle sizes, or a catalog of all circles, and gauges of circles, and there is no need at all for pi. But to compute the circumference or the area of the circle, without measuring those, we think we need pseudo-pi.

If you never use pi at high precision, it is questionable whether you should compute or use an index in which higher precision was always attained. In order to use pi at higher precisions, one either has to first find the more complex ratio used to compute it, or use a resource which tells what it is. This means that to know a higher precision of pi, one must rely on a tool, and not on memory. This means that using pseudo-pi, or the formulas involving circles, does not mean one has somehow gotten a method which attains high precision without using an index or reference. Instead, one uses an index/reference and the formula, from memory.
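The point about relying on a reference rather than memory holds in software too: in Python, for example, `math.pi` is itself a stored constant, a 64-bit floating-point value carrying only about 15 to 16 significant digits, and supplying more digits than it can hold changes nothing:

```python
import math

print(math.pi)  # 3.141592653589793 -- a fixed, finite, stored value
# A literal with extra digits rounds to the very same stored value:
print(math.pi == 3.14159265358979323846)  # True
```

Any precision beyond this demands an external table or a long computation, exactly the dependence on an index/reference described above.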

Now notice that, using the radius alone, and an index of scales on the radius, one depends on less, and may not actually require the index, if the circles desired for creation are known by their radii and nothing else. I.e., consider that all the circles you create or see, you recognize by the radius alone.

Already this is simpler and more honest than pi. It connects with how circles originated, how they are drawn without math, and with how we utilize a measure which is considered minimally fundamental to calculating circles. All creations of circles use the radius. Measurements of the circle use the radius, but not necessarily PI, if the circumference is measurable; and using the measure of the circumference, you would have the ratio that would generate pi.

If you have a circle already, are you responsible for measuring the circumference? You don’t know yet how well it’s drawn. Why not measure the circumference of an existing circle, and the radius, and having both, you have all that is required.
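A small sketch of this measured-first approach, with hypothetical string measurements in millimeters: having both numbers, the famous ratio is just one optional division at the end:

```python
# Hypothetical measurements of an existing, well-drawn circle, to the nearest mm.
circumference = 628  # string run around the outside
radius = 100         # string run from the center to the edge

# Both measures are all that is required; the ratio is an optional afterthought.
ratio = circumference / (2 * radius)
print(ratio)  # 3.14
```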

Having a circle, and for some reason not having the circumference, you may want to know the circumference. But using the radius, you already know from history, from the index, what circumferences result. Which is the same as trying to remember pi.

Pi was never really a number, but the result of a division of circumference by diameter, which is the radius times two. Awkwardly there was this ratio, which is used to derive pi, but that ratio

“Is not known by anyone at high precision.”

The ratio is based on knowledge about measures of circumferences and radii. How does one compute pi without the ratio? How does one have the ratio without first measuring (from other parts of math)? However, it appears here that one only needs the radius, and if the circle exists, one should measure the circle, not knowing whether it is a circle. Because all circles are:

“Irregular.”

One might try to take a shortcut and use pi, to find the circumference and area, only to find that, the circle is not a circle.

At this point it appears the simpler approach is to use wanattams at the expected level of precision, the radius, the circumference if it exists and can be measured, and memory or an index on the scale of circles on radii. This is simpler and has similar constraints to the existing approach of using the formulae we learned in school using pseudo-pi. It eliminates pi as something that exists. We know it does not exist from what was written previously. This implies

“elimination of pi in favor of wanattams is parsimonious, or more simple, and includes fewer false entities, than retaining pi.”

Pi is odd, and uses a misnomer. It is infinite, and yet we saw it cannot be. It uses decimals, which is strange. We never utilize it, but instead utilize a pseudo-pi, which we fail to admit is 3 in practice. We use three and claim it is pi. It is much simpler instead, to realize it does not exist. Notice that a central place in circles is related to radii. Radii may be used alone to create all circles, especially if we know circles by their radii. We may even recognize existing circles by radii.

“Circles only appear testable by radii.”

If you draw a bad circle, you will know it, if you use a method to trace it that is the same as to draw it. You will see that the radius from the center does not always touch the circumference. If you notice a circle is not a circle, you’ll test it with the radius, but

“You will never test it with pseudo-pi”

Imagine a hand-drawn circle is drawn by an artist who claims they can draw perfect circles. You see that, like the Earth, it is not as good a result as some might say, judging by true circles. You want to test. In order to test, you find various candidate centers. From the center, you find a radius to the outside. You then draw circles to see if they completely overlay the hand-drawn circle made by the artist. You find at two spots, where the circle seems flat or protruded, that it is inside or outside a circle drawn using a radius. Using this test, and never pseudo-pi, you come to know the circle is not correct. You drew a better circle.
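The test described above can be sketched as code: sample points from the drawn curve, pick a candidate center, and check whether every point sits at (nearly) the same distance from it. The point sets, center, and tolerance here are hypothetical:

```python
# Test a drawn "circle" using only a center and distances (radii), never pi.
def is_circle(points, center, tolerance):
    cx, cy = center
    distances = [((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in points]
    mean_radius = sum(distances) / len(distances)
    return all(abs(d - mean_radius) <= tolerance for d in distances)

well_drawn = [(10, 0), (0, 10), (-10, 0), (0, -10)]  # all at radius 10
flattened = [(10, 0), (0, 10), (-10, 0), (0, -7)]    # one flat/protruded spot
print(is_circle(well_drawn, (0, 0), 0.5))  # True
print(is_circle(flattened, (0, 0), 0.5))   # False
```

Pi never appears; the radius alone does all the testing work.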

From this we can see:

“Good circles are drawn with a radius, and bad circles are tested with a radius.”

It appears circles are not tested or created using pseudo-pi. If a circle exists, and pi is used, it is because some trust is already present that it is a circle and not a non-circle, or a poorly drawn circle. A circle, poorly drawn, cannot use pi.

From this it appears that wanattams, using a precise basic measure of a radius, without any decimal places, would produce, test, and measure all circles.

Here it appears, from all that was stated, that wanattams provide a superior approach over the use of pi and can discard pi, which was the purpose of this writing.

For any thing in nature which might be curved, which might seemingly be circular, or related to circles, it must first be determined that it is circular, and to do this one might use radii and wanattams, and for that there would be no need at all for pseudo-pi.

More to be confirmed…

This section is a continuation of the prior section, which is concerned with revising mathematics to use wanattams. It is a test of the extent of usefulness of an alternative approach to math, using base-1 numbering, and certain omissions of mathematical conventions which seem superfluous and arbitrary to the author. The objective is to have a mathematics that is maximally unarbitrary and natural, which the author thinks will imply it is more related to physics, logic, and the natural language spoken by human beings.

There are fundamentals in mathematics which really do not seem justifiable, yet people have learned these methods and rules and eventually, unreflectively, took them for complete and definitive parts of the system, which would be somewhat timeless and maybe not require alteration or overhaul. One part of math which people assume, perhaps not incorrectly, is that nature has real continuities. For example, time is taken to perhaps be continuous, if it exists, in that there is no smallest unit of time, and it would be represented graphically with a line. The assumption is that there are no minimal time jumps which could be the fundamental interval of time, or that it wouldn’t change between things observed. Similarly, and relatedly, movement is often assumed to be continuous. If you throw a ball, or fire a projectile, or watch the moon in the sky, it is believed that it covers all space between one location and the next, and does not instead take steps along the way. We imagine motion being represented with lines and curves, which we think to be smooth curves without jaggedness, or gaps, when we graph.

Using our memories, thinking of the movement of thrown objects, we imagine paths that are seemingly curved, but on very close inspection of our experience as it is, without anything added, we see a ball which is a discrete thing changing locations, and know from experience that it goes forward, up, and then later down, where it eventually stops. Using our memories, it seems we can somewhat, but strangely vaguely, think there is a curve in our retained fast perceptions, or more perceptual memory, which vanishes quickly. Thinking about this is uncomfortable because, whoever you are, it seems you really do see primarily an object in one location and then the next, and if you stare, you seem to see it changing, with the background, but you don’t perceive anything really obviously curved or continuous. Nevertheless the experience does have an aspect which feels like it supports continuity, because everything you see moving appears to transition from one location to the next without making jumps. If you cannot see it clearly or remember how it moves on its course, that does not make you believe it is jumping along the way. You really do feel it occupies all positions on its path, however small they might be, until it reaches its stopping location.

However, the author thinks these experiences do not really justify our inference that the underlying, most basic operation of movement is continuous. We already know that our perceptions of solid objects seem to show continuous use of space, but we know now that no matter what we look at that might be solid, no matter how smooth it is in appearance, and consistently dense, with no space showing, it is really separated atoms holding together with bonds, and these atoms themselves hardly use any space. What we experience that gives the appearance of continuity is a relationship of our brains and our perceptual system to objects on a macro scale, with errors included relating to limitations on perception and the nervous system, and to what can be achieved by any nervous system of a certain size that animals might have.

So this projectile we considered moving through space in our memory, or an object thrown, already was not one really solid and cohesive thing; instead, what was moving through space was a collection of bonded molecules and atoms, and it is not entirely unchanging on its course. Each atom has its own position and movement independent of the whole. The whole is not clearly understood as having a true center in which to understand the movement. It is really a complex collection of independently moving smaller objects, which have mutual relationships, making the entire collection movable according to certain further rules. But there is no single object which is to be understood entirely apart from the materials that comprise it. The craftsman making projectiles or balls would know this, and would know differences between materials, and their meaning regarding movement through space, which means he or she is using information which already refers to the parts of the object, to understand movement of the whole thing, that others would be overlooking, parts which he or she might pay too much attention to, given their personal interests in the movement of balls or projectiles.

This section is of mathematical interest, but we see what we are doing here is paying attention to the physical aspects, or physics, which would allow us to confirm whether math ought to have curves, or whether physics ought to have curves. Are there curves anyplace? A hypothesis that is being used throughout this experiment with wanattams is that math on paper is physics too, because whatever model is created, one should understand the materials used, and consider writing, even when abstract in line graphs, as having a natural analogy from the paper and ink to the things supposedly represented. There is a confusion, perhaps, that humans make, leaping from paper to the world. This leap may have something to do with what is thought about the paper, separately and later disconnected from the paper, and then applied or connected to physics, as if the paper were not itself really the cause of the comparison, or the true source of the analogy. This is not an argument for that thesis, for which this is not the best place, but if we consider mathematics itself physics in how it is conducted, even if separate from the physical instruments, then we may have a physical justification of mathematics, which is something many were looking for. Some think that mathematics defines physics, or physics mathematics, but it could be a confusion in that mathematics was physics already, in ways which lent to the eventual finding of utility in it. Without there being a physics-related component of mathematics, it would not have worked. Either way, many perceive there is a gap between the two that should not exist when it is more complete or more developed.

And here we recall that there are curves in math, and imagined curves in physics, but the author, working with discrete wanattams, is interested in whether computing systems, a logical definition of mathematics, and the unarbitrariness of counting in ones would have implications that are contrary to our assumptions about continuities. Given that continuities do not confirmably exist, but are supported by the math we think we have, and somewhat by our approach to physics, we know that we haven’t removed discontinuity as a hypothesis entirely yet. Furthermore, there are many reasons for thinking that this hypothesis is better than might appear at first, given that we find discrete objects when we look at anything really closely, and that we are unable to look really closely at movement to confirm there are no steps or small shifts at the root, small enough to make it appear continuous while really it is discontinuous. We also require the use of computing systems to see for us when we cannot, yet they always commit to looking often at an object but not continually, and so we know we must rely on a system to confirm for us what it cannot itself confirm.

Recently I have thought of some basic aspects of this question which appear to be supportive of my project to find a solution. Here are some of these points that I want to focus on as I continue this writing:

- For any point we have a commitment to an interval, and not a really precise location. This means all points represent lines, but we act as though they are precise enough to be about one spot.
- Points are square, or cubed, for any two or three dimensional coordinate system, and not circles. This supports the prior point because a cubical or square location must have sublocations. I stated a point is a line, with a width, but a line with a width might be considered a set of vertical and horizontal lines, forming a sequence of squares, or three dimensional sets of lines forming cubes. Either way points would occupy locations which have sublocations.
- Lines are unconfirmable in nature using our close perception, as not having gaps along the way, for any line one might think one has found. This includes drawn lines. This also includes natural lines or straightnesses on natural or manufactured objects. This also includes things which appear to move as if on lines, for the reasons included above already.
- Instruments take repeat measurements, but not continuous measurements.
- Computer systems repeatedly use instruments to take measurements and take repeated calculations, but do not take continuous calculations.
- Storage of memory and numbers in computers utilizes chunks.
- There are no infinitesimal numbers that can be written by hand, or on the computer, or stored in a computer.
- Computers showing line graphs must rely on pixels to display lines, and sometimes pretend curves, not actually plotted or drawn with individual pixel coordinates, smoothed using graphical methods unrelated to the actual functions or equations.
- Presenting, or drawing, curves on paper uses a method that might be akin to dragging. If you drag a pen, you did not really compute points along the way. But the result would falsely indicate that you did. If you were wanting to represent a function about the motion of objects, dragging the pen would lead you to think you’ve gotten a continuity depicted in the object. But that’s only a method of getting all the points down at once, or all the ink done, quickly. You did not consider whether you should drag your pen. Dragging your pen with ink causes you to think you’ve made a right analogy in your line about the movement of objects, yet that is only because of the side effect of connected ink.
- If you drag a stick in fine soil, it appears you have an early drawing of a line, before pen and paper. But it has gone through grains of material, and even your line which has been drawn has obvious breakages along the way. This is lost in ink, because the ink is smooth. By this you can be confused in your analogy by your writing or drawing implements or medium. Drawing on sand, you might wonder if you’re representing an analogy to an object which really steps through space, in a way that still results in what is apparently a line, because while you do see a line “drawn in the sand” you also see the gaps along the way. You can increase or decrease these gaps in the line by simply changing the fineness of the sand, and again, if you have a paint which has only microscopic gaps, you would feel as if you see continuity with no steps. Looking at it again with a microscope, you may remind yourself that your line did not represent all points on a path, considering the line as it is at the elemental or microscopic level. Seeing that line, you may not be so inclined to think that motion is on smooth continuities.
- Wherever we can look closely enough, we find discreteness.
- We have not looked closely enough at anything we think continuous, to see closely perhaps. We would not be able to say for sure if we’ve ever seen a motion closely. If it is truly continuous, then it is thought that one could forever look closer, and so one is still very far from seeing it closely, and arguably that is totally relative. However, if it is discrete at the root, then there is such a thing as looking at it closely, for finally seeing it at the most basic level. In that case, it is like finally seeing atoms, or seeing unicellular organisms. Looking more closely at what unicellular organisms are composed of does not make one feel one has not looked closely, once one has finally seen the whole things living! But if it were argued that they could never be seen, because they are infinitely small, then one can never see them closely.
- Probabilistic or predictive confirmation of functions or equations representing curves or continuities in nature rely on “spot checks” on measures which still plot. These would be plotted atop a function which draws lines on assumptions or expectations with defects in media already mentioned. This means that probabilistic or predictive confirmations of models of phenomena with equations only do very little checking, compared with continuities which are presumed to exist. This means they check in discontinuous ways.
- Probabilistic or predictive confirmations utilize tools and computing, which can only check at a certain maximum precision, which means for each spot check, a point is found, but that point occupies space with a smaller interval which has not been checked. Even by calling it a point, or a specific location, one has assumed there is continuity, but then also claimed to be working with a discrete unit, which would be used to test a formula, which could build up a continuity. This really amounts, however, to not checking more closely.
- Computer systems and instruments are limited in precision, however used.
- Computer systems and instruments do not apparently represent or simulate any continuities.
- Measurement tools always have a comparison implied, in which intervals are demarcated, which have start and end locations. For example, a ruler that shows inches, or centimeters, is compared to an object which is measured. Lines on that ruler are used as guides. When one thinks one has measured an object to be 2.5 inches, or the equivalent number of centimeters, one has found that line which is closest to, but farther than, the end of that object. This implies a rounding to the line. The line, however, has a thickness. This means all measures have rounding, or lack of precision, relating to intervals, and nearness of locations to lines which have intervals, which underneath are assumed to cover space and have smaller potential measures, which were not used. Looking smaller, it is not clear if there would be discontinuities found. The measurement tools, though, being physical things, like wood for rulers, or metals, are known to have discontinuities and granularities when seen more closely. Again this implies that a measuring implement that is discontinuous is used to try to precisely measure things which are themselves thought to be continuous. It is not clear that a discontinuous object or instrument, or system, or device, can ever be used to find something more precise than it can be; actually, it appears it cannot.
- More precise measuring tools still utilize spot checks and comparisons, which would have the same defects. Even lasers appear to utilize a medium which itself is discontinuous in that light may be understood as photons moving through space (something is moving through space which starts and ends, too). It is not understood if light moves on a path which is a continuous path, and itself has to be confirmed regarding the continuity of its motion. That is unconfirmed and that is the topic of this paper. However, the utilization of continuity of light as a measure would not be able to use unconfirmable continuities it might possess, meaning one is not looking that closely in any case, meaning the instruments using the laser or light, still do it with limitations related to those already stated.
- It is not clear that the continuous motion of any object may be compared to the motion of any other object to claim it too has continuity, if the supposedly continuous motion of the first object is not first confirmed. I do not yet know the full implications of this thought/intuition and leave it here temporarily unanalyzed to have it recorded. I will only state briefly that it appears we use objects which do not move to confirm movement and to believe they are moving continuously. For example, when we see a ball in a gymnasium moving, we compare its path to the background in the gym. Likewise, if we were to measure the movement of a ball to see if it is continuous, we might want to compare it on its path to a long ruler, to see if really, it traverses all spots on the ruler. This again is a fixed object for comparison. Knowing that a fixed object has gaps looked at very closely, it cannot finally do the work of confirming. This makes it seem we need to use something which is continuous to measure whether this moving thing is continuous in its motion. Well this appears to be only something we assume is continuous, which is another motion. And now I arrive at the intuition. We could record very rapidly, the motion of a continuous object, in a movie, that has very many frames, which can show it in incredible slow motion, and even shows it as stationary, when looking at the individual frames, one after another. One can then compare the motion of a slower moving object, or object which one is concerned about, for its continuity or discontinuity question. The faster moving object made a kind of ruler for the slower moving object. The faster object is the test of the slower. However, we have already seen that we have only assumed it is continuous, which was our motivation for using it as a guide. This implies that any measure whatsoever, including those that might be continuous, but aren’t confirmed to be continuous, cannot measure another object for continuity. 
This implies that one may need to have something continuous to measure anything to be continuous. There is much of interest here, not at all promising for confirming continuity, that will be discussed later. Rather, it makes it seem it is not confirmable. If not confirmable, and we can only find discontinuities, and there are fundamental problems relating to whether the theory is refutable, it is unscientific to accept continuity. Continuity would be an assumption and not an inference. This seems to provide very much material for thinking that wanattams, which are discrete and discontinuous, are much better for understanding motion without assumptions than assuming continuity.
- For any measure we make, we choose a smallest interval. It appears that smallest interval would imply we could use a wanattam, and no decimal places, or supposed subdivisions which are infinitely smaller, which would show continuity. Rather, the use is fundamentally discontinuous, and could use a smallest discontinuous interval measure (and it does). It appears then that wanattams are appropriate for representing motion.
- For any measuring implement we use, we have material, which is fundamentally discrete and has smallest units and intervals. For each of these, it appears that wanattams are appropriate for measure.
- If there are no continuities, or no way to refute continuities, AND all methods of working with phenomena use discontinuities, or minimal intervals of measure, it appears the most scientific approach is to utilize discontinuities, and potentially wanattams. This would not be incompatible with reality being truly continuous with respect to motion; rather, wanattams would be what we can use if we are scientific, and if we have not yet shown that continuities exist. Rather, it keeps continuity as a *hypothesis*.
- Mathematics appears to be the field most committed to continuity. Physicists may be willing to treat it as a hypothesis. This leads one to think something is amiss with mathematics, in its preservation of assumptions, and its unwillingness to take certain assumptions as hypotheses. This may imply that mathematics really is opposed to scientific technique. I would not equate methods of math with those of science in how the work is performed, and in which conclusions are protected. But I have said above that, by creating a foundation for mathematics in physics, one might show that math can be made scientific later, and show that now, it largely is not. Its techniques appear to indicate it is not scientific. Some subpoints:
- Math does not utilize testing methodology, but intuition. Whereas programming and algorithms in practice require testing, because of the limitations of the programmer, or mathematician.
- Math appears to be a field that assumes the mathematician does not err, in all the ways that a programmer would assume errors are possible.
- Mathematics for testing requires programming. A recursive equation or equation which accepts many inputs and has many outputs will have a repeat recipe for “showing the work” or “doing the work”. This requires programming and computing languages, and computer system architecture, and physical apparatus. The main point is repeated testing is required and not optional.
- Mathematics utilizes other people to intuitively find tests of its work, which the authors may not have identified. It is not known by a mathematician if their work is really complete, if they have not envisioned all the tests. Other minds are required to carefully review, to try to critically find defects, or areas where testing would be needed. The finding of defects would require tests too, to confirm whether what was found was defective or not. If computers are not used, tests are on paper, from other mathematicians. This is easily noticed to be a bad, inexhaustive, informal, and unsystematic approach to testing. It makes it appear that math has not found testing techniques which are acceptable or standard. What are the standards of testing in mathematics?
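Several of the points above come down to one mechanism: every instrument reports a whole count of some smallest chosen interval. A minimal sketch of that mechanism, where the interval sizes and the length being measured are hypothetical examples:

```python
# Every practical reading is a whole-number (wanattam-like) count of a
# smallest interval chosen in advance; anything finer is rounded away.
def measure(true_length, smallest_interval):
    return round(true_length / smallest_interval)

print(measure(25.437, 0.1))  # 254 counts of 0.1 mm
print(measure(25.437, 1.0))  # 25 counts of 1 mm
```

Whatever lies below the chosen interval is never observed, only rounded, which is the discontinuity of measurement described above.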

- Some in the history of philosophy and physics and mathematics have claimed that things like infinite recursions, or not having a beginning and end, are really inelegant ways of thinking that seem unaesthetic. Some have rejected certain views about nature on the basis of this worldview, although it does not appear to have a complete foundation. Yet mathematicians will have to accept certain things, assuming continuity. Firstly, there will be an assumption that subdivision of measure is always possible, yet it appears that in nature, subdivision can happen many times, but ends. It appears from my perspective that simply thinking you can subdivide a quantity allows you to think you can subdivide the next. Without knowing where the end is, one can have a game of imagining that one can keep doing it. I think maybe mathematics has played a game of thinking subdivision can go on indefinitely, and that this relates to not having a chosen measure and no chosen thing to subdivide. It has pretended there is something which cannot have any end to its subdivision, as if there were a substance, metaphysical, which has no smallest unit, or smallest granule, or atomic beginning. Secondly, it appears that infinities were created at a much older period of time, when thinking about mathematics was perhaps more metaphysical, or separated from any observation of anything, and more in contemplation. I can imagine that if I get in a sailboat, on the first and second day I will not see land, and so on the third day I will not see land, and so on the seven thousandth day I will not see land, and the remainder of the earth is an unlimited ocean. But this appears to be due to repeating an imagined event again and again. Arguably, this could be done on one imagined day of the sailboat ride. Suppose you were in a sailboat for one day only, and never saw another day in a sailboat. You went unconscious after day one, but on day one, you were in open ocean.
When you regained consciousness, you were on land. Later, you start to imagine how far the ocean might be across, and how long it would take to get across. You are limited to imagining one day on the water, but can pretend everything else, including alternative weather and the like. So let’s say you simply think to yourself: on day one you are on open water; on day two you are on open water with different weather; then some unlimited or undefined number of days pass; and one day after that, you are on open water with yet different weather. On that series, you may think the ocean is endless in length, that you cannot cross it, and that it can be counted as infinite days of sailing distance. (Continue to the next related point.)
- When we name an interval as a single unit, and claim we can subdivide it into ten, we have had an experience like this sailboat. In one experience we see that we do have a unit, and we see that, apparently, we can divide it into ten. Notice that thinking like mathematicians do, or like this person on the sailboat, we think we can immediately intuit that we can do it again. We can repeat it. And it seems we can, because when we imagine and zoom in on that next smaller unit, we think: it is like the first, we can divide it into tenths. And so mentally we do it again (we never zoomed in on the object; we mentally zoomed to the same object and just imagined it fit the smaller space; if we really did zoom, we would see more detail). So now we think to ourselves: we have a smaller unit, and we can divide it into ten. Then we realize that it seems we can keep going. Then we pretend we have a series in which we do the same an unlimited number of times, and state that on the next one we can still do it. But all we ever did was the first mental exercise, over and over. We never even did a second or a third.
- Suppose we have a wooden ruler of good quality, made of hickory. It has wood grain in it, and a small range of colors, which makes us think it has good consistency. The wood grains are smooth and appealing, but they are curved, and touch a number of locations on the ruler’s edge, where the lines are, where measurements are taken. The grains are brownish, and the rest of the wood is a somewhat consistent yellow, though it too is not totally consistent. Even though the wood is strong and dense, you can see finer details that just escape your ability to see. You can see transitions and tiny divots, and changes of texture, that are visible, but then you feel there are differences and transitions you can’t quite make out. It is uncomfortable. It is like looking at a fog of stars, thinking you see distinct stars, but then realizing you can’t. Or can you? There is something there you can’t make out, but if you could see smaller, under a microscope, you think you would see variations. The ruler is one foot long. You see already that it is marked for subdivision into 12 inches. Then look at one inch. You notice there are smaller units marked out, which are in 12s, oddly, and not 1/16ths or 1/8ths. So you have inches of inches, it appears to you. You look more closely at your inch, and you notice that the material differs in consistency from the whole foot. Fewer grains can fit in there. Then you look at the inch of the inch, and notice even fewer grains fit in there, and you see its colors differ. It doesn’t have the same color as the whole. Looking closely, you wonder whether, looking at only a smaller portion, it will all have the same yellow, and whether all the same yellow parts will have the same grain types. Looking really closely, you also wonder if you are noticing a constancy of another yellow you can now make out, between the yellows you were noticing earlier.
Now, thinking like a mathematician, you think you can divide the smaller inch into twelve more inches. You then look really closely, but now you realize that what you are subdividing looks like it is getting a little irregular. 2/12ths maybe would fall in a divot. 1/12th looks like it has a tiny black speck you didn’t see at first (is that wood?). You can’t really see 12ths either; it is too small. You imagine you can vaguely see 1/12ths. You didn’t draw any lines. You don’t know how thin the lines would need to be to make the 1/12ths very clear, like the rest of the ruler. But now it isn’t super smooth wood either, and you would rather have something more consistent, maybe something continuous. But now, thinking like a mathematician, you think you can go further. Now you wonder: are you going to see different kinds of collections of cells, if you can really see? What is that lump there? What is that speck again? Will 2/12ths fall in the speck alone, and is it not wood? The mathematician pretends you can repeat the same mental event as the first division into 12ths, without substance. Maybe they imagine a foot, with 12 inches, on a white background (like paper), with lines (which are familiar) that might be black ink. So it is pretended there is a substance, or distance, that is either totally consistent, or isn’t a substance at all. I think these imaginings are based on paper-writing experiences, like the sailboat. You imagine a clean paper surface, nice and white, and then you repeatedly think you can zoom in and get that same surface as you divide into tenths over and over, and that the view never changes. Then you pretend you did it forever, and that still, after doing it forever, you could do it once more. That would correspond to an algebraic series, which supposedly represents doing this rigorously. But the series has the same error in it. The mathematician is as foolish in reasoning as the person on the sailboat.
You, subdividing the ruler, see that you actually end up doing something somewhat different as you go deeper and deeper into measuring the ruler (notice the ruler is self-measuring its material). You know you are going to reach a level in the ruler where its composition has different characteristics, and you may reach fundamental units; already you found irregularities. This is the measuring implement itself, which isn’t supposed to have gaps and irregularities and divots, and a smallness at which it can’t measure anymore, can’t have lines on it, and can’t be subdivided further, because atoms were reached. This appears also to be an error in thinking there are fundamental forms in math to which nature corresponds.
- The above utilizes an argument of induction where statistics was not performed. But statistics is using instances of experience to infer next experiences. This is induction. The mathematical series does not appear to use induction, but may use a repetition, sometimes, of a first experience of sorts, which was physical and not mathematical. This would explain why a mathematician thinks they use imagination; but there is no imagination without experiences that are physical. There is no imagination that is not composed of perceptions that occurred in experience, from childhood onwards. It would be interesting to know how blind and deaf mathematicians imagine their work; it appears they would do it using tactile imagination, not vision or sound or writing. Which means they would not imagine rulers with white backgrounds and lines of ink. They would not imagine series of numbers, in lines showing numbers as symbols. But if they did think they could do math with infinite series, they would probably feel raised repetitions of formulae, with a sequencing that pretends subsequents were not the same as the precedents. I think they imagine the same first item over and over, or with a minor adjustment to the first in the second and third (akin to the weather change by the sailor, who really only experienced a first sunny day).
- Mathematicians did not include, in proofs, how they utilize imagination, and whether what they imagined included defects which were then carried into the conclusions. I.e., I imagined a continuous substance subdivided into tenths once, and that I could repeat it over and over, then wrote out an algebraic series, proving that division of a measurement into smaller and subsequently smaller units is infinite. What was forgotten is that if imagination included perception in actual testing, it would include each of the supposed items in the series not being identical with the first. The reality was richer than the imagination; reality tests the imagination. If the imagination had included the tests, and the perception that comes with extensive measurement experience of subdivision (which was never experienced at a certain point in the series, though pretended not to be absent from it), they would not have imagined that they could forever subdivide. It may be a lack of intelligence in the visualization of experience that leads to thinking that one experience, or less experience, is a suitable replacement for more, and that relying on so little does not lessen the imagination. Imagining on all of one’s experience would be a better imagination than imagining on much less. Here there could be an awkward realization. Math may consist of attempts at extending imaginations which are very unreflective of the world, built on very little material, narrow, focused, and perhaps abstract. In other words, thinking of all measurement by visualizing a series of subdivisions into tenths algebraically, or using a white background in the mind with lines like a ruler, would be the least developed imagination. The most developed imagination would want to think through all the simulations in life in which the idea would apply or fail to apply, and being able to simulate that way, would do it, potentially, and find errors.
Abstraction, then, can be less imaginative, and due to less imagination, than not abstracting, or than refusing to extend an abstraction too far. “This experience can’t be used for quite that much…” There is something useful in a blueprint design of a ruler, which excludes details that are unwanted. However, a blueprint of a ruler does not allow one to say that the same design is usable at all scales, having seen no other scales. In fact, a good architect or engineer, designing in such a way, should know what the design is for, and it would not be for all scales and all materials. This means the architect who assumes infinite use of the blueprint is very confused, by comparison with the one who tests in mental simulations on richer experiences of the world. Such confusions can happen by keeping certain ways of thinking separate from others, however. So it is possible that someone who can simulate well, with a rich imagination of experience, would briefly fall into thinking that abstraction ought to be kept separate, and therefore fail to see how imagination could be better employed in mathematics. If it is recalled that what is imagined in the abstraction relates to an experience that is physical, it can be seen that the thinking in math which is supposedly separate from physics is still based on the physics you have learned from ordinary human sensory experience. This would be physics akin to what is called gaming physics. You are a tester of games in their correspondence to, or representability of, the natural world: in the accuracy of the images in relation to objects (do they look like the real objects?), and also in the behavior of things. A physicist is not better than an athlete at testing a video game on the realism of athletics, including all objects and movements. However, the physicist may have to find formulae and programming that eventually seem, to himself or herself, or to testers, to be something someone would like to play.
Later they gradually improve the realism, and their tests of the realism come from their regular experience, not their training in physics. Interestingly, though, expertise may result, such that a set of formulae and programming knowledge is known, which others do not know, and which combines to create simulations showing how people think sports or games really work. But these are not as rich as reality, and athletes would find differences. At present, this is not even a demanding test: none of the games with “physics engines” show the user anything everyone can’t find defects with. This implies that all people really have acquired through experience, in the style of machine learning, a very good model of sorts, in the nervous system, of how the world behaves. It would be really strange if anyone had all the formulae and math that would represent the same. From this it can be seen that a scientist of physics knows very little compared to what they know from experience as an animal, given that they could only slowly begin to recreate, with computers, what they already know, and they would certainly not get it close on all parts of experience. They could only be responsible for a small part of the game, in perhaps one area of specialization. This supports my view that people are not really scientists, in an incredibly meaningful way related to their life-guidance, on physics in particular, because they would have to read papers from academic research to get only small points. But the scientist might be very smart, and notice patterns, and have learned as an advanced animal. This could lead a scientist to think that that is why they understand the world well; but that is not due to science, but to intelligent learning from normal worldly experience, which might include science, but does not at all depend on it, or apply it. Again, one reads a few papers, and that does no more than explain some small portions of life.
An implication of this, which supports my other views, is that people aren’t scientists in the way they think they are. Imagine thinking you have moral superiority because you understood a paper that touched on carbon emissions, whereas another person used regular natural reasoning, but extensively, on all they already knew or learned, which may never have been related to a scientific paper. That doesn’t mean it was not the correct usage of relevant facts, otherwise gleaned in life over many years or decades, doing things that had nothing obviously to do with climate science. In this case it’s like being a person smartly observing aright, doing all the things they do, then drawing those things from experience, and reasoning about them, to come to a good result. That person could still read scientific papers and make use of them, so this is not incompatible with doing the same activities. But the person may not value being scientific in the sense of becoming aligned with the results or work of science as communicated in academic papers.
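The earlier point that subdivision “can happen many times, but ends” has a rough computational analogue, offered here only as my own illustrative sketch (it is not an argument from the text): on a real machine, repeatedly dividing a unit length into tenths terminates, because the representable granules run out.

```python
# Repeatedly subdivide a unit length into tenths on a real machine.
# Unlike the imagined "forever" series, the representable values bottom
# out: below the smallest subnormal double (about 5e-324), dividing by
# ten rounds to exactly 0.0, and the subdivision game ends.
length = 1.0
steps = 0
while length > 0.0:
    length /= 10.0
    steps += 1

print(steps)  # a few hundred subdivisions, then the granule vanishes
```

The machine, like the hickory ruler, has a smallest granule; the loop is finite even though the mental series pretends it is not.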

This writing has gone far in another direction, but I am not certain this information will be useless to progress on the more specific goal of determining whether the world has continuities or not; or whether, either way, wanattams are better used to represent supposed continuities as discontinuities, so long as continuities are not confirmed, or are not truly usable (since unused and assumed). The foregoing discussion might support becoming more accurate, and avoiding assumptions, in a new way of doing math.

There seems to be an issue which might cause difficulty, which is the degree to which people can introspect on their process of thinking about math and physics, trace its origins to earlier learnings, and see what their real way of doing it is, at the level of detail the best introspector might have. People who are not strong at tracing thoughts or reasonings in this way might not be able to see how they really did it, and might not see the errors they have carried from imagination into their conclusions. I think this is something everyone would err on, but over time we are making fewer errors of this type.

*Started Sunday, August 21 ^{st}, 2022*,

Recently I have returned to reviewing Gödel’s incompleteness theorem, which is recorded in On Formally Undecidable Propositions of Principia Mathematica and Related Systems. This paper, in my estimation, is not one that is very cleanly prepared. Rather, I should say, it has many issues which contribute to its inaccessibility. Here I will record something which might clarify the work for others; however, the intention is to really clarify it for myself, so I can very clearly communicate what is true of his paper, and what the real implications might be.

I have divided my analysis of his work according to the following divisions, in the table of contents below.

- Summary of his work
- My approach to his same objective
- Glossary & Definitions
- Axioms and Justifications
- Rules of Inference and Justifications
- Claims of Gödel
- Omissions, Sudden Introductions, Errata
- Dependencies
- My notes for research
- Questions and Criticisms

In this section I restate Gödel’s work in my own words.

There is a philosophical interest in Gödel’s work, pursued with a zeal that encourages various thinkers to claim conclusions and implications which are strictly not implied by the results of his work, which is mathematical. Gödel’s work, being mathematical, apparently has intentions in its results and implications which are really mathematical and logical. Some of these conclusions, where they really do connect with demonstrations and proofs, may have implications that extend into the philosophical. Of course, the interests of the original authors of related works are not only mathematical but philosophical too; Bertrand Russell himself is a philosopher greatly esteemed outside of mathematics. Popular works of various authors lead to thoughts about what Gödel’s paper might claim which are not really sufficiently mathematical. It is to be recalled that each of the papers related to the present work of Gödel was written by accomplished mathematicians, with advanced degrees specifically in math.

Here I will focus on the work of Gödel and summarize his statements and the development of his argument, sticking to the mathematics and logic involved, without straying too far into what his work might mean to a popular audience wanting to relate things to popular philosophy. It is clear that doing so would result in a lack of clarity about what Gödel was doing and what his paper really says. The objective of the present author is to understand what he says and what his paper really implies, with a focus on the mathematics in the paper and the objectives of his colleagues where those objectives are logical and mathematical.

My own interests outside of this effort are like most others’: I’m interested in life and philosophy, and in what the conclusions might imply about things that matter inside of mathematics, but more importantly, what might be considered outside, in everyday life and in general knowledge.

“How might my general knowledge be altered by an understanding of this work, and what does it mean about my future plans?”

These considerations will be kept separate from those appearing in this section, but if I understand Gödel aright, without embellishment and confusion, I will do better at answering such a question.

…

In this section, I use my own approach, informed by my history in modern computer science, to accomplish the same objectives, or prove the same thesis, which Gödel purports to have proven. My goal here is to better understand Gödel’s thesis, but instead of using his proof, I use an alternative approach intended to arrive at a similar conclusion, or else to fail, which would indicate whether the implications others might draw from Gödel’s work, using his work alone, are justifiable.

In my understanding of the work being considered, the objective is to determine what may or may not be proven, without serious consequences or side effects, within a closed system of rules about mathematics. The objectives of the various logicians creating systems like Principia Mathematica relate to seeing whether mathematics may be more firmly defined by more basic rules and axioms of logic, in the hope that mathematics can be given a well-understood and solid foundation. In a sense, the objective is finishing the foundations of mathematics, and defining all of mathematics in terms of the simpler rules and axioms existing in the foundations.

Here it must be acknowledged that later developments in computer science seem to indicate that general-purpose computing, built on a few axioms and rules of inference really similar to those that appear in these systems, has changed much of life on the globe, fulfilling objectives of representing and simulating parts of life, and of representing and testing the mathematics used in the sciences. The sciences, relying upon math, rely also on computing systems to carry out the mathematical operations. This is all achieved on a very simple foundation of logic, which really was the outcome of work from Russell and Whitehead and others.
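As a small sketch of that simplicity (my own illustration, not anything drawn from Principia Mathematica itself), the familiar truth-functional connectives can all be derived from a single primitive such as NAND, the operation from which hardware logic is also commonly built:

```python
# All the usual truth-functional connectives can be defined from one
# primitive, NAND ("not both"), illustrating how a tiny logical basis
# can generate the rest of propositional logic.
def nand(a: bool, b: bool) -> bool:
    return not (a and b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return nand(nand(a, b), nand(a, b))

def or_(a, b):
    return nand(nand(a, a), nand(b, b))

def implies(a, b):
    return nand(a, nand(b, b))  # a NAND (not b)  ==  (not a) or b

# Check each derived connective against Python's built-ins on all inputs.
for a in (False, True):
    for b in (False, True):
        assert not_(a) == (not a)
        assert and_(a, b) == (a and b)
        assert or_(a, b) == (a or b)
        assert implies(a, b) == ((not a) or b)
```

A handful of such definitions, iterated, is essentially the logical foundation on which general-purpose computing rests.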

There are interesting questions which relate, then, to Gödel’s work, which claims that there are serious flaws in systems resembling computer systems when it comes to really representing or proving various mathematical statements which exist outside of computing and outside of logic. There are also questions as to the definability of certain mathematical statements using computing systems. It appears in some places that definition in systems like Principia Mathematica relates to provability, in that the generation of mathematical formulae from underlying axioms and rules of inference should not be possible if the resulting formulae are not provable using those same axioms and rules of inference, which are supposed to be basic enough to be self-evident for proof. A computing system should not be able to generate or represent a mathematical statement unless that statement were already proven by the design of that computing system.

It is the opinion of the author, knowing much about the design of computer systems in architecture and their implementations in software, that mathematical implementations have been mostly separated from the hardware implementations which really did rely on logical design. This means that the results of mathematical operations are more related to software implementations, which probably approximate mathematics rather than define it using the rules of logic and axioms the way the hardware appears to do in some ways, and the way Russell and Whitehead and others tried to do on paper in their systems of logic.
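That separation between approximation and definition can be illustrated with a small example of my own (hedged, and not a claim about any particular system discussed here): the binary floating-point arithmetic used by most software approximates decimal arithmetic, while a software rational type defines the result exactly.

```python
from fractions import Fraction

# Floating point approximates real arithmetic: the decimal 0.1 has no
# exact binary representation, so ten of them do not sum to exactly 1.
total = sum(0.1 for _ in range(10))
print(total == 1.0)   # False: the float sum drifts from the true value

# Exact rational arithmetic in software, by contrast, defines the result.
exact = sum(Fraction(1, 10) for _ in range(10))
print(exact == 1)     # True: Fraction arithmetic is exact
```

The drift is tiny, but it shows that the operations most software performs are engineered approximations of mathematics rather than derivations from logical axioms.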

Gödel’s thesis is that there are mathematical propositions and formulae which, along with their denials, conjointly cannot be proven. In other words, the statements cannot be proven true, but also cannot be proven false. These propositions then have an indeterminate status with respect to their correctness and truth. They are found unprovable both in their assertion and in their denial, and are said to be undecidable. Undecidable is taken to mean: “If one claims that this proposition is true, and another claims that it is false, neither can be satisfied.” It cannot be proven one way or the other.

Gödel does not work towards showing that one specific example faults the approach, but rather that there would be many instances because of the faultiness of the approach. One might say this is like stating that an architectural style used for building homes would result in dangerous dwellings. He supposedly shows that the entire architectural style is faulty. This is where there can be overstatement as to the implications, though. While he shows, perhaps, that an entire agenda or plan results in undecidability for a large or infinite number of propositions of certain kinds, it does not imply that certain errors cannot be overlooked as unimportant. This would be like finding that an architectural style has defects which lead to some undesirable appearance on a large number of occasions, but that this defect does not make the style less desirable overall. The implications have to be well understood in their effects for various disciplines, and in what they mean pragmatically for the various objectives mathematicians and computer scientists have.

It appears that Gödel’s work has not blocked the progress of computing, which has its foundation in logic.

Here I try to show the meaning of Gödel’s work by trying to do what Gödel did in another way, with modern knowledge about computing and systems architecture in mind. When Gödel wrote, the computer did not exist at all. A field of expertise I have is computer software architecture, and this field did not exist at all at the time Gödel was writing. Some things included in his paper are intensely interesting because they seem to presage some fundamentals of computer architecture, like how symbols would be encoded into numbers, which we know today are binary. He uses a very different approach to encoding his symbols, which does not resemble the approach in modern computers; instead he encodes everything in a sequence of natural numbers.

*To be continued…*

Here are terms used by Gödel which require clarification and definition for readers to follow along.

- formula
- proof schema
- proposition
- axiom
- undecidable proposition
- unprovable or provable proposition
- recursion
- class-sign
- relation
- function
- substitution
- recursively defined
- symbol
- sign
- calculable

These are the fundamental axioms and their justifications. These are those that Gödel is permitted to use in his proof.

These are the rules of inference, and the justifications of those rules, which Gödel is permitted to use.

Gödel’s work is one that has a number of claims that are not directly part of his thesis. These must be tracked, in addition to those which are part of his thesis, to identify what omissions may exist, and which supposed implications and conclusions are not substantiated. It is not unusual for a work to have statements which were not subsequently edited out in order to tighten up and focus the paper on the primary objective of the author. For this paper, there are many readers who do not understand it, yet are willing to support various purported conclusions and implications which may or may not have really been substantiated. For that reason I am giving the paper a careful reading, to track all the claims he has really made, and which of those claims have been supported, or were attempted to be supported.

The purpose of this is to track issues in his argumentation. His paper does not proceed precisely as he says it will in his introduction to the proof, which is supposed to be less exacting. There appear to be omissions, sudden unjustified introductions of technique, and various errors in the work, which need to be tracked. These issues do not mean that Gödel has been unsuccessful in carrying out his objective, but they do imply that his paper is not as easy to read and understand, and not as well edited, as one might desire. In the course of this analysis, however, it may alternatively be found that his errors, changes, and unexpected introductions of methods actually do harm to his argument, and perhaps he has not proven what he claims to have proven.

This paper utilizes many assumptions carried over from other authors, or from the discipline of mathematics itself, in branches already developed. Reliance upon these authors, their work, already-created axioms, already-created rules of inference, and other portions of mathematics may result in the importation of errors. He has not developed the topic afresh, and many omissions in explanation are made, with the effect that Gödel places upon the reader the burden of researching and studying all the dependencies in advance, or in the course of laborious parallel reading. There are some dependencies that do not seem justified, going on initial intuitions, which do come from experience in related areas of interest including logic, mathematics, and computer science. I will utilize this list to see which dependencies appear trustworthy and which might not be. Given the density and length of the materials Gödel relied on, it is unlikely I can definitively approve each and every dependency with authority. However, letting the reader know which portions I have taken to be reliable can be used for further analysis, and potentially further corroboration or refutation of Gödel’s claims.

- Peano’s axioms
- Russell and Whitehead’s *Principia Mathematica* and the rules of inference there adopted
- Hilbert’s logical syntax
- Zermelo-Fraenkel’s work
- External mathematical techniques requiring detailed listing (later).
- Rules of Proof (proof theoretic rules relied upon).

The methods of proof used by Gödel that are central to his argument require explication.

This section includes notes for my own personal research: areas where I am aware I need additional information in order to understand Gödel’s statements more accurately.

For research:

- Peano’s axioms.
- Russell’s theory of types, and whether there is a relationship to the typelifting described by Gödel.
- A clear definition of free variables in math and logic.

In this section I list questions and criticisms, which may target Gödel’s claims, but may also critique approaches of any of the other related mathematicians and their mathematics. These questions and criticisms may relate to works I have in progress that have different directions. I admit here that my inclination is contrary to Gödel’s, and my work has a different trajectory. I have many questions and criticisms that either relate to his supposed results, or else the implications of his paper which may or may not be justified, given what he really shows, and what he does not show.

- Serious flaw: What are the boundaries of elements utilized within the systems, and what is used outside? He doesn’t offer any demarcation that obviously separates PM and ZF from mathematics proper. This means the paper really is cloudy about what the systems contain and what math contains, and given its thesis this seems a real problem. Gödel should be reminding the reader when he is operating within the systems being criticized, when he is not, and when he is blending the two. He doesn’t do this. Consider how he introduces recursion. He says it is a parenthetical consideration, but then offers 45 separate formulae on 4 propositions, which begin his list of propositions used for critique; yet recursion appears to be external to anything utilized within PM and ZF (to be confirmed). External to the system vs. internal.
- On formally undecidable propositions of PM: undecidable from which perspective? From the perspective of the worker within the system of PM, or from the perspective of the worker in math external to PM, utilizing recursion and various other tools not used in PM?
- Specific samples or examples of undecidable propositions have to be shared. For each one shared, the perspective from which the undecidability is stated is needed. Undecidable with respect to what? Oddly, a human observer is missing in the math that would not be missing in the physics. If I’m going to be the observer, and I’m going to see and confirm that the proposition is undecidable, I should know on what basis that is, and where I stand. Am I stuck inside PM and its rule set, or am I outside it, with other rule sets?
- What is the repeatable test of undecidability which might be re-used or replicated, for other systems? “I showed it for all systems and didn’t state what the test was, and it is not repeatable?” Unless you never need to? That would be strange wouldn’t it?
- What are all the rules of math that provide its foundation? This is what Russell and Whitehead and others are trying to determine, but it’s a big topic, so likely some is left out. Enter Gödel. What I’m not yet clear on concerning Gödel is whether he thinks that, within a constrained system, statements are undecidable. It appears that is not what he is doing; rather, he is showing the gap: here are some propositions generated by your system which are still undecidable. Is the decidability test within or outside the systems? Intuitively, this appears to be a complaint that the system doesn’t capture all of math yet, and that certain propositions (and he says propositions about whole numbers) are undecidable within the system. This is like asking: “What is undecidable outside of the system, within the system, that can be expressed because of a defect in the definition of what can be expressed?” It seems the strategy there would be a forced import of external math, on an error of what can be imported, and that import results in an undecidability in the system. Else, something expressible in the system, on the use of mathematical rules outside of the system, is shown to be undecidable. What is used from outside of the systems created by Russell and Whitehead, ZF, and Peano would be critical to knowing what is happening in the proof.
- How are the natural numbers that are mapped to formulae and propositions in Gödel decoded?
- How does his mapping of natural numbers to formulae and propositions compare with existing approaches in computer systems, using binary encodings such as ASCII or UTF-8?
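To make the comparison in the last two questions concrete, here is a sketch of my own (it follows the prime-exponent style in which Gödel numbering is commonly presented, not necessarily the exact scheme of his paper): a sequence of symbol codes becomes the exponents of successive primes, recoverable by factorization, whereas a modern computer simply concatenates fixed-width binary codes, as UTF-8 does.

```python
# Prime-exponent encoding in the style often used to present Gödel
# numbering: symbol codes become exponents of successive primes, so the
# whole sequence is recoverable by unique prime factorization.
def primes(n):
    """Return the first n primes by trial division."""
    found = []
    k = 2
    while len(found) < n:
        if all(k % p for p in found):
            found.append(k)
        k += 1
    return found

def godel_number(codes):
    n = 1
    for p, c in zip(primes(len(codes)), codes):
        n *= p ** c
    return n

def decode(n, length):
    codes = []
    for p in primes(length):
        c = 0
        while n % p == 0:
            n //= p
            c += 1
        codes.append(c)
    return codes

codes = [1, 3, 2]            # hypothetical codes for three symbols
g = godel_number(codes)      # 2**1 * 3**3 * 5**2 = 1350
assert decode(g, 3) == codes

# A modern computer instead maps each symbol to a fixed byte sequence
# and concatenates them, as UTF-8 does:
utf8 = "∀x".encode("utf-8")  # the quantifier becomes three bytes, 'x' one
assert utf8.decode("utf-8") == "∀x"
```

The prime scheme makes a single huge number carry a whole formula, which suits a proof about arithmetic; the byte scheme trades that for compactness and fast decoding, which suits hardware.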

Key Criticisms:

- The work is vague in many locations and the reader is not guided on a path that would be clear to his best readers.
- Not indicating the level of analysis, when he is moving from concrete to the metamathematical, and when his statements are composed of statements at various levels which would require significant thinking to reduce or resolve.
- Lack of visual diagrams supporting the above purpose.
- Lack of definitions of certain terms relied upon, that the reader may not understand, such as his use of “type”.
- Movement to methods of argument that do not appear to be planned in the beginning of the paper. In other words, one reads his supposed proof approach and sees one thing, but then gets something quite different when one reads his actual proof.

Mattanaw I., formerly "Christopher Matthew Cavanaugh"

I am a retired executive, software architect, and consultant, with professional/academic experience in the fields of Moral Philosophy and Ethics, Computer Science, Psychology, Philosophy, and more recently, Economics. I am a Pandisciplinarian, and Lifetime Member of the High Intelligence Community.

Articles on this site are eclectic, and draw from content prepared between 1980 and 2024. Topics touch on all of life’s categories, and blend them with logical rationality and my own particular system of ethics.