Sunday, March 17, 2024

Homomorphic encryption as an elegant way to preserve privacy

Sabine Hossenfelder talked about homomorphic encryption, which is an elegant and extremely general algebraic way to guarantee data privacy (see this). The idea is that the encryption respects the algebraic operations: sums go to sums and products go to products. The processing can be done on the encrypted data without decryption. The outcome is then communicated to the user and decrypted only at this stage. This saves a huge amount of time.

What first comes to mind is Boolean algebra (see this). In this case the homomorphism is truth preserving. The Boolean statement formed as a Boolean algebra element is mapped to the same statement but with the images of the statements replacing the original statements. In the set theoretic realization of Boolean algebra this means that unions are mapped to unions and intersections to intersections. In Boolean algebra, the elements are representable as bit sequences and the sum and product are done component-wise: one has x·x = x and x + x = 0. Ordinary computations can be done by representing integers as bit sequences.
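As a toy illustration of the homomorphism property (a sketch only, not a secure encryption scheme), the following Python snippet represents bit sequences as integers, uses the component-wise sum (XOR) and product (AND), and checks that a bit-permutation "encryption" maps sums to sums and products to products, so that the processing can be done on the encrypted data and the result decrypted afterwards. The key, function names and sequence length are only illustrative choices.

```python
# Toy illustration of homomorphic processing on bit sequences (not a secure scheme):
# a bit permutation commutes with the component-wise sum (XOR) and product (AND).

import random

N = 8  # length of the bit sequences

def to_bits(x):
    return [(x >> i) & 1 for i in range(N)]

def from_bits(bits):
    return sum(b << i for i, b in enumerate(bits))

def make_key():
    perm = list(range(N))
    random.shuffle(perm)
    return perm

def encrypt(x, perm):
    bits = to_bits(x)
    return from_bits([bits[perm[i]] for i in range(N)])

def decrypt(y, perm):
    bits = to_bits(y)
    out = [0] * N
    for i in range(N):
        out[perm[i]] = bits[i]
    return from_bits(out)

key = make_key()
x, y = 0b10110010, 0b01101100

# Component-wise algebra: x*x = x and x + x = 0.
assert x & x == x and x ^ x == 0

# Homomorphism property: encrypt, compute on encrypted data, decrypt.
assert decrypt(encrypt(x, key) ^ encrypt(y, key), key) == x ^ y   # sums go to sums
assert decrypt(encrypt(x, key) & encrypt(y, key), key) == x & y   # products go to products
print("XOR and AND are respected by the bit-permutation 'encryption'")
```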

In any computation one must perform a cutoff and the use of finite fields is a neat way to do it. The Frobenius homomorphism x → x^p in a field of characteristic p maps products to products and, what is non-trivial, also sums to sums since one has (x+y)^p = x^p + y^p. For the prime fields F_p the Frobenius homomorphism is trivial but for the fields F_p^e with p^e elements, e>1, this is not the case. The inverse is in this case x → x^(p^(e-1)). These finite fields are induced by algebraic extensions of rational numbers: e corresponds to the dimension of the extension induced by the roots of a polynomial.
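To make the Frobenius homomorphism concrete, here is a minimal sketch in Python for the field with 9 elements, F_3[t]/(t^2+1) (p=3, e=2, an assumed example); it verifies that z → z^p maps sums to sums and products to products, that it is non-trivial for e>1, and that z → z^(p^(e-1)) inverts it.

```python
# Minimal sketch: the Frobenius homomorphism z -> z^p in the 9-element field F_3[t]/(t^2 + 1).
# p = 3, e = 2; elements are pairs (a, b) representing a + b*t with t^2 = -1.

p, e = 3, 2

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def mul(u, v):
    a, b = u
    c, d = v
    # (a + b t)(c + d t) = (ac - bd) + (ad + bc) t, using t^2 = -1
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def power(u, n):
    result = (1, 0)
    for _ in range(n):
        result = mul(result, u)
    return result

def frobenius(u):
    return power(u, p)

elements = [(a, b) for a in range(p) for b in range(p)]

# Frobenius respects sums and products: (u+v)^p = u^p + v^p and (uv)^p = u^p v^p.
for u in elements:
    for v in elements:
        assert frobenius(add(u, v)) == add(frobenius(u), frobenius(v))
        assert frobenius(mul(u, v)) == mul(frobenius(u), frobenius(v))

# For e > 1 the map is non-trivial: t -> t^3 = -t.
assert frobenius((0, 1)) != (0, 1)

# The inverse is z -> z^(p^(e-1)): applying it after Frobenius gives the identity.
for u in elements:
    assert power(frobenius(u), p ** (e - 1)) == u

print("Frobenius in the 9-element field preserves sums and products and is invertible")
```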

The Frobenius homomorphism extends also to the algebraic extensions of p-adic number fields induced by the extensions of rationals. This would make it possible to perform the calculations in the extensions and only at the end perform the approximation replacing the algebraic numbers defining the basis of the extension with rationals. To guess the encryption one must guess the prime that is used, and the use of large primes and extensions of p-adic numbers induced by large extensions of rationals could preserve the secrecy.

p-Adic number fields are highly suggestive as a computational tool, as became clear in the p-adic thermodynamics used to calculate elementary particle masses: for p = M_127 = 2^127 - 1 assignable to the electron, the two lowest orders give a practically exact result since the higher order corrections are of order 10^-76. For p-adic number fields with a very large prime p the approximation of p-adic integers by a finite field becomes possible and the Frobenius homomorphism could be used. This supports the idea that p-adic physics is ideal for the description of cognition.
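As a quick order-of-magnitude check (assuming, as the text suggests, that the higher order corrections are suppressed by 1/p^2), one can compute M_127 and the size of 1/M_127^2 with Python's arbitrary-precision integers:

```python
# Order-of-magnitude check for p = M_127 = 2^127 - 1 assigned to the electron:
# if the higher order p-adic corrections are suppressed by 1/p^2, they are of order 10^-76.

from math import log10

M127 = 2**127 - 1
print(f"M_127 = {M127}")
print(f"log10(1/M_127^2) = {-2 * log10(M127):.1f}")   # about -76.5
```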

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Saturday, March 16, 2024

Direct evidence for the TGD view of quasars

In a new paper in The Astrophysical Journal (see this), JILA Fellow Jason Dexter, graduate student Kirk Long, and other collaborators compared two main theoretical models for emission data for a specific quasar, 3C 273. The title of the popular article is "Unlocking the Quasar Code: Revolutionary Insights From 3C 273".

If the quasar were a blackhole, one would expect two emission peaks. If the galactic disk is at constant temperature, one would expect a redshifted emission peak from it. The second peak would come from the matter falling into the blackhole and it would be blueshifted relative to the first peak. Only a single peak was observed. Somehow the falling of matter into the quasar is prevented. Could the quasar look like a blackhole-like object in its exterior but emit radiation and matter preventing the falling of matter into it?

This supports the TGD view of quasars as blackhole-like objects associated with cosmic strings thickened locally to flux tube tangles (see this, this, this and this). The transformation of pieces of cosmic strings to monopole flux tube tangles would liberate the energy characterized by the string tension as ordinary matter and radiation. This process would be the TGD analog of the decay of the inflaton field to matter. The gravitational attraction would lead to the formation of the accretion disk but the matter would not fall down into the quasar.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Friday, March 15, 2024

Magnetite produced by traffic as a possible cause of Alzheimer's disease

A rather unexpected partial explanation for Alzheimer's disease has been found: magnetite particles, which can be found in urban environments in the breathing air contaminated by exhaust gases (see this). I have written earlier about Alzheimer's disease from the TGD point of view (see this). Magnetite particles seem to be found in the hippocampus of those with the disease, a region central to memory. Now it has been found that the exposure of mice to magnetite leads to the generation of Alzheimer's disease. The overall important message to the decision makers is that the pollution caused by traffic in the urban environment could be an important cause of Alzheimer's disease.

The brain needs metabolic energy. Hemoglobin is central to the supply of metabolic energy because it binds oxygen. Could it be thought that Alzheimer's is at least partially related to a lack of metabolic energy in the hippocampus? In the sequel I will consider this explanation in the TGD framework.

Short digression to TGD view of metabolism

Oxygen molecules O2 bind to the iron atoms of hemoglobin (see this), which already have valence bonds with 5 nitrogen atoms; in the bond created, Fe has received 5 electrons from the nitrogens and a sixth one from the oxygen molecule O2. So Fe behaves in the opposite way to what one would expect, and hemoglobin is chemically very unusual!

Phosphate O=PO3, or more precisely the phosphate ion O=P(O-)3, which also plays a central role in metabolism, also breaks the rules: instead of accepting 3 valence electrons, it gives up 5 electrons to the oxygen atoms.

Could the TGD view of quantum biology help to understand what is involved? Dark protons created by the Pollack effect provide a basic control tool of quantum biochemistry in TGD. Could they be involved also now? Consider first the so-called high energy phosphate bond, which is one of the mysteries of biochemistry.

  1. Why do the electrons in the valence bonds prefer to be close to P in the phosphate ion? One would expect just the opposite. The negative charge of the 3 oxygens could explain why the electrons tend to be nearer to P.
  2. The TGD based view of metabolism allows one to consider a new physics explanation in which O=P(O-)3 is actually a "dark" variant of the neutral O=P(OH)3 in which the 3 protons of the OH groups have become dark (in the TGD sense) by the Pollack effect, which has kicked the 3 protons to the monopole flux tubes of the gravitational magnetic body of the phosphate at such a large distance that the resulting dark OH looks like OH-, that is negatively charged. A charge separation between the biological body and the magnetic body would have occurred. This requires metabolic energy, basically provided by the solar radiation. One could see the dark phosphate as a temporary metabolic energy storage, and the energy would be liberated when ATP transforms to ADP.
Could this kind of model apply also to the Fe binding with 5 N atoms in hemoglobin by valence bonds such that, contrary to naive expectations, the electrons tend to be closer to Fe than to the N atoms? Can one imagine a mechanism giving an effective negative charge to the N atoms or to the heme protein and to O-O?
  1. In this case there are no protons, unlike in the case of the phosphate ion. The water environment however contains protons, and pH, the negative logarithm of the proton concentration, measures their concentration. pH=7 corresponds to pure water in which the H+ and OH- concentrations are the same. The hint comes from the fact that a small pH, which corresponds to a high proton concentration, is known to be favourable for the binding of oxygen to the heme group.
  2. Could dark protons be involved and what is the relationship between dark proton fraction and pH? Could pH measure the concentration of dark protons as I have asked?
  3. Could the transformation of ordinary protons to dark protons at the gravitational MB of the heme protein induce a negative charge due to OH- ions associated with the heme protein and could this favour the transfer of electrons towards Fe? Could the second O of O-O form a hydrogen bond with H such that the proton of the hydrogen bond becomes dark and makes O effectively negatively charged?

What could the effect of magnetite be?

Magnetite particles, 0.5 micrometers in size, consist of Fe3O4 units containing iron and oxygen. According to Wikipedia, magnetite appears as crystals and obeys the chemical formula Fe^(2+)(Fe^(3+))_2(O^(2-))_4. The electronic configuration of Fe is [Ar] 3d^6 4s^2, and the Fe^(3+) ions have donated, besides the s electrons, also one d electron to oxygen.

Could it happen that somehow the oxygen absorption capacity of hemoglobin would decrease, that the amount of hemoglobin would decrease, or that oxygen would bind to the Fe3O4 units on the surface of the magnetite particle? For example, could one think that some of the O2 molecules bind to Fe3O4 units at the surface of the magnetite instead of hemoglobin? Carbon monoxide is dangerous because it binds to the heme. Could it be that the magnetite crystals also do the same, or rather could heme bind to them (thanks to Shamoon Ahmed for proposing this more reasonable looking option)?

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Wednesday, March 13, 2024

About the problem of two Hubble constants

The usual formulation of the problem of two Hubble constants is that the value of the Hubble constant seems to be increasing with time. There is no convincing explanation for this. But is this the correct way to formulate the problem? In the TGD framework one can start from the following ideas discussed already earlier (see this).
  1. Would it be better to say that the measurements in short scales give slightly larger results for H0 than those in long scales? Scale does not appear as a fundamental notion in general relativity or in the standard model. The notion of a fractal relies on the notion of scale but has not found its way to fundamental physics. Suppose that the notion of scale is accepted: could one say that the Hubble constant does not change with time but is length scale dependent? The number theoretic vision of TGD brings in two length scale hierarchies: p-adic length scales Lp and dark length scales Lp(dark) = n×Lp, where one has heff = n×h0 for the effective Planck constant, with n defining the dimension of an extension of rationals. These hierarchies are closely related since p corresponds to a ramified prime (most naturally the largest one) for a polynomial defining an extension with dimension n.
  2. I have already earlier considered the possibility that the measurements in our local neighborhood (short scales) give rise to a slightly larger Hubble constant. Is our galactic environment somehow special?
Consider first the length scale hierarchies.
  1. The geometric view of TGD replaces Einsteinian space-times with 4-surfaces in H=M4×CP2. Space-time decomposes to space-time sheets and closed monopole flux tubes connecting distant regions, and radiation arrives along these. The radiation would arrive from distant regions along long closed monopole flux tubes, whose length scale is LH and whose thickness is d, where d is the geometric mean d = (lP×LH)^(1/2) of the Planck length lP and the length LH. d is about 10^-4 meters, the size scale of a large neuron (see the numerical sketch after this list). It is somewhat surprising that biology and cosmology seem to meet each other.
  2. The number theoretic view of TGD is dual to the geometric view and predicts a hierarchy of primary p-adic length scales Lp ∝ p^(1/2) and secondary p-adic length scales L2,p = p^(1/2)×Lp. The p-adic length scale hypothesis states that p-adic length scales Lp correspond to primes near powers of 2: p ≈ 2^k. p-Adic primes p correspond to so-called ramified primes for a polynomial defining some extension of rationals via its roots.

    One can also identify dark p-adic length scales

    Lp(dark) =nLp ,

    where n = heff/h0 corresponds to the dimension of an extension of rationals serving as a measure of the evolutionary level. heff labels the phases of ordinary matter behaving like dark matter, which explain the missing baryonic matter (galactic dark matter corresponds to the dark energy assignable to monopole flux tubes).

  3. p-Adic length scales would characterize the size scales of the space-time sheets. The Hubble constant H0 has the dimension of inverse length so that the inverse of the Hubble constant, LH ∝ 1/H0, characterizes the size of the horizon as a cosmic scale. One can define an entire hierarchy of analogs of LH assignable to space-time sheets of various sizes, but this does not solve the problem since H0 ∝ 1/Lp varies very fast with the p-adic scale, which comes as a power of 2 if the p-adic length scale hypothesis is assumed. Something else is involved.
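A quick numerical check of the geometric mean estimate d = (lP×LH)^(1/2) quoted in item 1, using the standard Planck length and a Hubble length computed from an assumed illustrative value H0 = 70 km/s/Mpc:

```python
# Numerical check: d = sqrt(l_P * L_H), the thickness of the long closed monopole flux tubes.
# H0 = 70 km/s/Mpc is an assumed illustrative value; l_P is the Planck length.

from math import sqrt

c   = 2.998e8       # m/s
l_P = 1.616e-35     # Planck length, m
Mpc = 3.086e22      # m

H0  = 70 * 1e3 / Mpc        # Hubble constant in 1/s
L_H = c / H0                # Hubble length, ~1.3e26 m
d   = sqrt(l_P * L_H)       # geometric mean

print(f"L_H = {L_H:.2e} m")
print(f"d   = {d:.2e} m")   # ~5e-5 m, of order 10^-4 m: the size scale of a large neuron
```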
One can also try to understand the possible local variation of H0 by starting from the TGD analog of inflation theory. In inflation theory the temperature fluctuations of the CMB are essential.
  1. The average value of heff is <heff> = h, but there are fluctuations of heff, and quantum biology relies on very large but very rare fluctuations of heff. The fluctuations are local and one has <Lp(dark)> = <heff/h0> Lp. This average value can vary. In particular, this is the case for the secondary p-adic length scale L2,p and its dark counterpart L2,p(dark) = n×L2,p, which define the Hubble length LH and H0 for the first and the second option respectively.
  2. The critical mass density is given by 3H0^2/(8πG). The critical mass density is slightly larger in the local environment, that is in short scales (a numerical illustration follows after this list). As already found, for the first option the fluctuations of the critical mass density are proportional to δn/n and for the second option to -δn/n. For the first (second) option the experimentally determined Hubble constant increases when n increases (decreases). The typical fluctuation would be δheff/h ∼ 10^-5. What is remarkable is that this is correctly predicted if the integer n decomposes to a product n = n1×n2 of nearly identical or identical integers.

    For the first option, the fluctuation δheff/heff = δn/n in our local environment would be positive and considerably larger than on the average, of order 10^-2 rather than 10^-5. heff measures the number theoretic evolutionary level of the system, which suggests that the larger value of <heff> could reflect the higher evolutionary level of our local environment. For the second option the variation would correspond to δn/n ≤ 0, implying a lower level of evolution, which does not look flattering from the human perspective. Does this allow us to say that this option is implausible?

    The fluctuation of heff around h would mean that the quantum mechanical energy scales of various systems, determined by <heff> = h, vary slightly in cosmological scales. Could the reduction of the energy scales due to a smaller value of heff for systems at a very long distance be distinguished from the reduction caused by the redshift? Since the transition energies depend on powers of the Planck constant in a state dependent manner, the redshifts for the same cosmic distance would be apparently different. Could this be tested? Could the variation of heff be visible in the transition energies associated with the cold spot?

  3. The large fluctuation in the local neighbourhood also implies a large fluctuation of the temperature of the cosmic microwave background: one should have δT/T ≈ δn/n≈ δ H0/H0. Could one test this proposal?
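To put numbers on the critical density formula of item 2, a short sketch evaluating ρ_cr = 3H0^2/(8πG) for the two measured values of the Hubble constant (assumed here to be 67.4 and 73.0 km/s/Mpc) and the corresponding relative differences:

```python
# Critical mass density rho_cr = 3 H0^2 / (8 pi G) for two Hubble constant values.
# 67.4 and 73.0 km/s/Mpc are assumed representative long-scale and short-scale values.

from math import pi

G   = 6.674e-11   # m^3 kg^-1 s^-2
Mpc = 3.086e22    # m

def rho_cr(H0_km_s_Mpc):
    H0 = H0_km_s_Mpc * 1e3 / Mpc       # 1/s
    return 3 * H0**2 / (8 * pi * G)    # kg/m^3

rho_long, rho_short = rho_cr(67.4), rho_cr(73.0)
print(f"rho_cr(67.4) = {rho_long:.2e} kg/m^3")    # ~8.5e-27
print(f"rho_cr(73.0) = {rho_short:.2e} kg/m^3")   # ~1.0e-26
print(f"delta H0 / H0   = {(73.0 - 67.4) / 67.4:.1%}")
print(f"delta rho / rho = {(rho_short - rho_long) / rho_long:.1%}")   # about twice delta H0/H0
```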
See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Herbig Haro objects from the TGD point of view

The Youtube posting "The James Webb Space Telescope Has Just Made an Incredible Discovery about Our Sun! Birth of Sun!" (see this) tells about the Herbig Haro object HH211, located at a distance of 1000 light years, of which JWST has provided a picture (I hope that the sensationalistic tone of the title does not irritate too much: it seems that we must learn to tolerate this style).

Herbig Haro objects are luminous objects associated with very young stars, protostars. Typically they involve a pair of opposite jets containing streams of matter flowing with a very high speed of several hundred km/s. The jets interact with the surrounding matter and generate luminous regions. HH211 was the object studied by JWST. The jets were found to contain CO, SiO and H2.

Herbig Haro objects provide information about the very early stages of star formation. As a matter of fact, the protostar stage still remains rather mysterious since the study of these objects is very challenging, already because their distances are so large. The standard wisdom is that stars are born, evolve and explode as supernovae and that the remnants of supernovae provide the material for future stars, so that the portion of heavy elements in them should gradually increase. The finding that the abundances of elements seem to depend only weakly on cosmic time is in conflict with this picture and forces us to ask whether the vision about protostars should be modified. Also JWST found that galaxies in the very young Universe can look like the Milky Way and can have the element abundances of recent galaxies, which challenges this belief.

The association of the jets to Herbig Haro objects conforms with the idea that cosmic strings or monopole flux tubes formed from them are involved with the formation of a star. One can consider two options for how the star formation proceeds in the TGD Universe.

  1. The seed for the star formation comes from the transformation of the dark energy associated with the cosmic string or monopole flux tube to ordinary matter (it could also correspond to a large heff phase and behave like dark matter and explain the missing baryonic matter). By the conservation of the magnetic flux, the magnetic energy per unit length of the monopole flux tube behaves like 1/S and decreases rapidly with its transversal area S. The volume energy per unit length increases like the area, but its growth is compensated by the phase transitions reducing the value of the analog of the cosmological constant Λ, so that on the average this contribution behaves as a function of the p-adic length scale in the same way as the magnetic energy per unit length. The energy liberated in the process is however rather small except for flux tubes that are still almost cosmic strings, so that this process might apply only to the formation of the first generation stars.
  2. The second option is that the process is analogous to "cold fusion", interpreted in the TGD framework as dark fusion (see this, this and this), in which ordinary matter, say protons and perhaps even heavier nuclei, is transformed to dark protons at the monopole flux tubes having a much larger Compton length (proportional to heff) than ordinary protons or nuclei. If the nuclear binding energy scales like 1/heff for dark nuclei, the nuclear potential wall is rather low and dark fusion can take place at rather low temperatures (see the numerical sketch after this list). The dark nuclei would then transform to ordinary nuclei and liberate almost all of their ordinary nuclear binding energy, which would lead to a heating that would eventually ignite the ordinary nuclear fusion at the stellar core. Heavier nuclei could be formed already at this stage rather than in supernova explosions. This kind of process could occur also at the planetary level and produce heavier elements outside the stellar cores.

    This process in general requires an energy feed to increase the value of heff. In living matter the Pollack effect would transform ordinary protons to dark protons. The energy could come from solar radiation or from the formation of molecules, whose binding energy would be used to increase heff (see this). This process could lead to the formation of the molecules observed also in the jets from HH211. Of course, also the gravitational binding energy liberated as the matter condenses around the seed could be used to generate dark nuclei. This would also raise the temperature, helping to initiate dark fusion. The presence of dark fusion and the generation of heavy elements already at this stage distinguishes between this view and the standard picture.

    The flux tube needed in the process would correspond to a long thickened monopole flux tube parallel to the rotation axis of the emerging star. Stars would be connected to networks by these flux tubes forming quantum coherent structures (see this). This would explain the correlations between very distant stars, which are difficult to understand in standard astrophysics. The jets of the Herbig Haro object parallel to the rotation axis would reveal the presence of these flux tubes. The translational motion of matter along a helical flux tube would generate angular momentum. The flux tubes would make possible the transfer of the surplus angular momentum, which would otherwise make the protostar unstable. By angular momentum conservation, the gain of angular momentum by the protostar could involve the generation of opposite angular momentum assignable to the dark monopole flux tubes.
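A small numerical sketch of the scalings quoted in option 2 above: the Compton length of a dark proton is n = heff/h times the ordinary proton Compton length and the dark nuclear binding energy scales like 1/n. The value of n used below is an arbitrary illustrative choice, not a prediction of the model.

```python
# Scalings used in the dark fusion picture: dark Compton length = n * lambda_p with
# n = heff/h, and the nuclear binding energy scales like 1/n. The value of n is an
# arbitrary illustrative choice.

hbar = 1.055e-34   # J s
c    = 2.998e8     # m/s
m_p  = 1.673e-27   # proton mass, kg

lambda_p = hbar / (m_p * c)   # reduced proton Compton length, ~2.1e-16 m
E_B      = 7.0e6              # typical nuclear binding energy per nucleon, eV

n = 10**6                     # assumed heff/h
print(f"ordinary proton Compton length: {lambda_p:.2e} m")
print(f"dark proton Compton length:     {n * lambda_p:.2e} m")   # ~2e-10 m, atomic scale
print(f"scaled binding energy:          {E_B / n:.1f} eV")       # ~7 eV, chemistry scale
```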

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, March 11, 2024

Blackhole-like object as a gravitational harmonic oscillator?

As described, in the TGD Universe blackhole-like objects are monopole flux tube spaghettis and differ from the ordinary stars only in that for blackholes the entire volume is filled by monopole flux tubes with heff=h for which the thickness is minimal and corresponds to a nucleon Compton length. For heff>h also the flux tubes could fill the entire volume of the star core.

Just for fun, one can ask what the model of a gravitational harmonic oscillator gives in the case of Schwarzschild blackholes. The formula rn = n^(1/2)×r1, with r1/R = [1/(2β0)]^(1/2)×(rs/R)^(1/4), gives for R = rs the condition r1/rs = 1/(2β0)^(1/2). β0 ≤ 1/2 gives r1/rs ≥ 1 so that there would be no other states than the possible S-wave state (n=0). β0 = 1/2 gives r1 = rs and one would have mass only at the n=0 S-wave state and the n=1 orbital. For β0 = 1 (the maximal value), one has r1/rs = (1/2)^(1/2) and r2 = rs would correspond to the horizon. There would be an interior orbit with n=1 and the S-wave state could correspond to n=0.

The model can be criticized for the fact that the harmonic oscillator property follows from the assumption of a constant mass density. This criticism applies also to the model for stars. The constant density assumption could be true in the sense that the mass difference M(n+1)-M(n) between the orbitals rn+1 and rn for n ≥ 1 is proportional to the volume difference Vn+1-Vn, proportional to rn+1^3-rn^3 = (n+1)^3-n^3 = 3n^2+3n+1. This would give M = m0 + m×(nmax+1)^3, leaving only the ratio of the parameters m0 and m free. This could be fixed by assigning to the S-wave state a radius and a constant density. This condition would give an estimate for the number of particles, say neutrons, associated with the oscillator Bohr orbits. In a more realistic description in terms of wave functions, this condition would fix the total amount of matter at the various orbitals associated with a given value of n.
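A minimal numerical sketch of the orbital radii, assuming the formula rn = n^(1/2)×r1 with r1/rs = 1/(2β0)^(1/2) for R = rs as used above; the β0 values are the two cases discussed in the text.

```python
# Bohr orbit radii for a blackhole-like object with R = r_s, assuming
# r_n = sqrt(n) * r_1 and r_1/r_s = 1/sqrt(2*beta_0) as in the text.

from math import sqrt

def radii_over_rs(beta_0, n_max=3):
    r1_over_rs = 1.0 / sqrt(2.0 * beta_0)
    return [sqrt(n) * r1_over_rs for n in range(1, n_max + 1)]

for beta_0 in (0.5, 1.0):
    print(f"beta_0 = {beta_0}: r_n/r_s =", [round(r, 3) for r in radii_over_rs(beta_0)])

# beta_0 = 0.5 gives r_1 = r_s: only the possible n=0 S-wave state lies inside the horizon.
# beta_0 = 1.0 gives r_1 = r_s/sqrt(2) and r_2 = r_s: one interior orbital (n=1) plus the S-wave state.
```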

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Are blackholes really what we believe them to be?

James Webb produces surprises at a steady rate and at the same time is challenging the standard view of cosmology and astrophysics. Just when I had written an article about the most recent findings, which challenged the basic assumptions of these fields, including those of general relativity, and thought that I could rest for a few days, I learned of a new finding in FB. The title of the popular Youtube video (see this) was "James Webb Telescope Just Captured FIRST, Ever REAL Image Of Inside A Black Hole!"

Gravitational lensing is the method used to gain information about these objects and it is good to start with a brief summary of what is involved. One can distinguish between different kinds of lensings: strong lensing, weak lensing, and microlensing.

  1. In strong lensing (see this), the lens is between the observer and the source of light so that the effect is maximized. For a high enough mass of the lens, lensing causes multiple images, arcs or Einstein rings. The lensing object can be a galaxy, a galaxy cluster or a supermassive blackhole. For point-like objects one can have multiple images, and for extended emissions rings and arcs are possible.

    The galactic blackhole, SgrA*, at the center of the Milky Way at a distance of 27,000 light-years was imaged in 2022 by the Event Horizon Telescope (EHT) Collaboration (see this) using strong gravitational lensing and a radio telescope network of planetary scale. The blackhole was seen as a dark region at the center of the image. The same collaboration had already observed the blackhole in the M87 galaxy at a distance of 54 million light years in 2019.

  2. In weak lensing (see this), the lens is not directly between the observer and the source so that the effect is not maximized. Statistical methods can however be used to deduce information about the source of radiation or to deduce the existence of a lensing object. The lensing effect magnifies the image of the object (convergence effect) and stretches it (shear effect). For instance, weak lensing led quite recently to the detection of linear objects (see this), which in the TGD framework could correspond to cosmic strings, the basic objects in the TGD based cosmology and in the model for galaxies, stars and planets.
  3. In microlensing (see this) the gravitational lens is a small object, such as a planet, moving between the observer and the star serving as the light source. In this case the situation is dynamic. The lensing can create two images of a point-like object but these need not be distinguishable, so that the lens serves as a magnifying glass. The effect also allows the detection of lens-like objects even if they consist of dark matter.
The recent findings of JWST concern a supermassive blackhole located 800 million light years away. Consider first the GRT based interpretation of the findings.
  1. What was observed by the strong lensing effect was interpreted as follows. The matter falling into the blackhole was heated and generated an X-ray corona. This X-ray radiation was reflected back from a region surrounding the blackhole. The reflection could be based on the same effect as the reflection of long wavelength electromagnetic radiation from the ionosphere acting as a conductor. This requires that the surface of the object is electrically charged. TGD indeed predicts this for all massive objects, and this electric charge implies quantum coherence in astrophysical scales at the electric flux tubes (see this), which would be essential for the evolution of life on Earth.
  2. After this, the radiation reflected from behind the blackhole should have ended up in the blackhole and stayed there, but it did not! Somehow it got through the blackhole and was detected. It would seem that the blackhole was not completely black. This is not at all the behavior of a civilized blackhole respecting the laws of physics as we understand them. Even well-behaving stars and planets would not allow the radiation to propagate through them. How did the reflected X-ray radiation manage to get through the blackhole? Or is the GRT picture somehow wrong?
Consider next the TGD inspired interpretation. Could the TGD view of blackhole-like objects come to the rescue?
  1. In TGD, monopole flux tube tangles, generated by the thickening of cosmic strings (4-D string-like objects in H=M4×CP2) and producing ordinary matter as the dark energy of the cosmic strings is liberated (see this), are the building bricks of astrophysical objects including galaxies, stars and planets. I have called these objects flux tube spaghettis.

    Einsteinian blackholes, identified as singularities with a huge mass located at a single point, are in the TGD framework replaced with topologically extremely complex but mathematically and physically non-singular flux tube spaghettis, which are maximally dense in the sense that the flux tube spaghetti fills the entire volume (see this). The closed flux tubes would have a thickness given by the proton Compton length. From the perspective of classical gravitation, these blackhole-like objects behave locally like Einsteinian blackholes outside the horizon, but in the interior they differ from ordinary stars only in that the flux tube spaghetti is maximally dense.

  2. The assumption, which is natural also in the TGD based view of primordial cosmology replacing the inflation theory, is that there is quantum coherence in the length scale of the flux tubes, which behave like elementary particles even when the value of heff is heff = nh0 = h or even smaller. What this says is that the size of the space-time surface quite generally defines the quantum coherence length. The TGD inspired model for blackhole-like objects suggests heff = h inside the ordinary blackholes. The flux tubes would contain sequences of nucleons (neutrons) and would have a thickness of the order of the proton Compton length. For larger values of heff, the thickness would increase with heff, and the proposal is that also stellar cores are volume filling blackhole-like objects (see this).

    Besides this, the protons at the flux tubes can behave like dark matter (not the galactic dark matter, which in the TGD framework would be dark energy associated with the cosmic strings) in the sense that they can have a very large value of the effective Planck constant heff = nh0, where h0 is the minimal value of heff (see this). This phase would solve the missing baryon problem and play a crucial role in quantum biology. In the macroscopic quantum phase photons could be dark and propagate without dissipation, and part of them could get through the blackhole-like object.

  3. How could the X-rays manage to get through the supermassive black hole? The simplest option is that the quantum coherence in the length scale of the flux tube containing only neutrons allows photons to propagate along it even when one has heff=h. The photons that get stuck to the flux tube loops would propagate several times around the flux tube loop before getting out from the blackhole in the direction of the observer. In this way, an incoming radiation pulse would give rise to a sequence of pulses.
I have considered several applications of this mechanism.
  1. I have proposed that the gravitational echoes detected in the formation of blackholes via the fusion of two blackholes could be due to this kind of sticking inside a loop (see this). This would generate a sequence of echoes of the primary radiation burst.
  2. The Sun has been found to generate gamma rays in an energy range in which this should not be possible in standard physics (see this). The explanation could be that cosmic gamma rays with a very high energy get temporarily stuck at the monopole flux tubes of the Sun so that Sun would not be the primary source of the high energy gamma radiation.
  3. The propagation of photons could be possible also inside the Earth along possibly dark monopole flux tubes, at which the dissipation is small. The TGD based model for the Cambrian explosion (see this, this and this) proposes that photosynthesizing life evolved in the interior of the Earth and burst to the surface of the Earth in the Cambrian explosion about 540 million years ago. The basic objection is that photosynthesis is not possible in the underground oceans: solar photons cannot find their way to these regions. The photons could however propagate as dark photons along the flux tubes. The second option is that the Earth's core (see this and this) provides the dark photons, which would be in the same energy range as solar photons. The mechanism of propagation would be the same for both options.
In the TGD framework, one must of course take the interpretation of the findings inspired by general relativity with a grain of salt. The object called a supermassive blackhole could be something very different from a standard blackhole. If it is a thickened portion of a cosmic string, it would emit particles instead of absorbing them in an explosion-like thickening of the cosmic string transforming dark energy to matter and radiation (this would be the TGD counterpart of the decay of inflaton fields to matter, see this, this and this). Of course, the matter bursting into the environment from a BH-like object would tend to fall back and could cause the observed phenomena in the way discussed above. The X-rays identified as the reflected X-rays could correspond to this kind of X-rays reflected back from the blackhole-like object. I am not a specialist enough to immediately choose between these two options.

See the article About the recent TGD based view concerning cosmology and astrophysics or the chapter with the same title.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Thursday, March 07, 2024

Counter teleportation and TGD

Tuomas Sorakivi sent links to interesting articles related to the work of Hatim Salih (see this and this). Salih is a serious theorist and the recent scandalous wormhole simulation using a quantum computer (Sycamore) is not related to him.

Salih introduces the concept of counter teleportation. It is communication that does not involve classical or quantum signals (photons). Counterfactuality is a basic concept: the first web source that one finds tells "Counterfactuals are things that might have happened, although they did not in fact happen. In interaction-free measurements, an object is found because it might have absorbed a photon, although actually it did not."

The example considered by Salih is as follows.

  1. Consider a mirror system consisting of a) fully reflective mirrors and b) mirrors that let through the horizontal polarization H and reflect the vertical polarization V. The system consists of two paths: A and B. At the first mirror, which is a type b) mirror, the signal splits into two parts, H and V, which propagate along A and B respectively. At the end the signals meet at a type b) mirror: H goes through to detector D1 and V is reflected and ends up at detector D2.
  2. The polarization H going through the b) mirror at the first step travels along the path A. Path A contains only one fully reflective mirror, and the beam reflected from it ends up at the downstream mirror of type b) as H type polarization and goes to the detector D1.
  3. At the first step, the reflected V travels along the path B. The path B contains many steps, and with each step the polarization is slightly rotated so that the incoming polarization V transforms into H at the end but with a phase opposite to that of the H coming along A. It interferes to zero with the contribution coming along A, and it is detector D2, which registers V, that clicks.

    I'm not sure, but I think that in the B-path mirrors, the polarization directions H and V are chosen so that nothing gets through. Hence "counterfactuality". There is no interaction with photons: only the possibility of it and this is enough.

  4. Bob can control path B and can block it so that nothing can get through. The result is that only the signal coming from path A gets through and travels to detector D1. Bob can therefore communicate information to Alice. For instance, at moments of time t_n=nt_0 he can block or open path B. The result is a string of bits that Alice observes. This is communication without photons or classical signals.
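The logic of the arrangement, as described above, can be caricatured with a few lines of Python: an amplitude-level toy model in which blocking path B removes the destructively interfering contribution at D1, so that Bob's blocked/open choices map directly to which detector clicks. This is only a sketch of the simplified description given here; it ignores losses and the chained quantum Zeno structure of the actual counterfactual protocols.

```python
# Toy model of the two-path arrangement: path A carries H to D1 with amplitude +1/sqrt(2);
# path B, when open, rotates V into H with the opposite phase, -1/sqrt(2), so the two
# contributions cancel at D1 and D2 clicks. Blocking B leaves only the A contribution and
# D1 clicks. Bob's blocked/open choices thus become a bit string read by Alice.
# Losses and the quantum Zeno chaining of real counterfactual protocols are ignored.

from math import sqrt

def detector(path_B_blocked: bool) -> str:
    amp_A = 1.0 / sqrt(2)                                 # H arriving at D1 along A
    amp_B = 0.0 if path_B_blocked else -1.0 / sqrt(2)     # opposite-phase H along B
    return "D1" if abs(amp_A + amp_B) > 1e-12 else "D2"

def bob_sends(bits):
    # Convention: Bob blocks B to send 1 (D1 clicks) and leaves it open to send 0 (D2 clicks).
    return [detector(path_B_blocked=(b == 1)) for b in bits]

def alice_reads(clicks):
    return [1 if click == "D1" else 0 for click in clicks]

message = [1, 0, 1, 1, 0, 0, 1]
assert alice_reads(bob_sends(message)) == message
print("Alice recovers Bob's bit string:", message)
```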
The basic question is what the blocking of channel B means in the language of theoretical physics. It is a mesoscopic or even macroscopic operation. That's where Bob comes in as a conscious, intentional entity. Here recent theoretical physics cannot help.

Salih realizes that this is something new that standard quantum physics cannot describe. Such a situation leads to a paradox. Salih considers many options, starting from different interpretations of quantum measurement theory.

  1. "Weak measurement", as introduced by Aharonov and his colleagues, is one option presented. In the name of honesty, it is necessary to be politically incorrect and say that this model is already mathematically inconsistent.
  2. "Consistent history approach" is another option that was hoped to solve the measurement problem of quantum mechanics. It gives up the concept of unitary time evolution. Also this model is mathematically and conceptually hopelessly ugly. A mathematician could never consider such an option, but emergency does not read the law.
  3. Wormholes as a cause or correlate of quantum entanglement is the third attempt to describe the situation. The problem is that they are unstable and the ER-EPR correspondence has not led to anything concrete even though there are scary big names behind it. Salih also suggests a connection with quantum computation but this connection is extremely obscure and requires something like AdS/CFT.

    Here, however, I think Salih is on the right track: it has been realized that the solution to the problem is at the space-time level. The ordinary trivial topology of Minkowski space is not enough. The question is how to describe geometric objects like this experimental setup on a fundamental level. In the standard model, they are described phenomenologically by means of matter densities, and this is of course not enough at the quantum level.

What does TGD say? TGD brings a new ontology both at the space-time level and in quantum measurement theory.
  1. In addition to elementary particles, TGD brings to quantum physics the geometric and topological degrees of freedom related to the space-time surfaces. A description of the observed physical objects of different scales is obtained: typically they correspond to a non-trivial space-time topology. Spacetime is not a flat M^4, not even its slightly curved GRT variant, but a topologically extremely complex 4-surface with a fractal structure: space-time sheets glued to larger space-time sheets by wormhole contacts, monopole flux tubes, etc.
    1. The system just considered corresponds to two different space-time topologies. Photons can travel a) along path A (blocking) or b) along both paths A and B simultaneously (no blocking).
    2. Bob has the competence of a space-time topology engineer and can decide which option is realized by blocking or opening channel B, that is by changing the spacetime topology.
    3. Describing this operation as a quantum jump means that Bob is quantum-entangled with the geometric and topological degrees of freedom of channel B. The initial state is a superposition of open B and closed B. Bob measures whether the channel is open or closed and gets the result "open" or "closed". The outcome determines what Alice observes. Monopole flux tubes replacing wormholes of GRT serve as correlates and prerequisites for this entanglement.
    The controlled qubit (channel B open or closed) is macro- or at least nanoscopic and cannot be represented by the spin states of an elementary particle.

    Note that the experimental arrangement under consideration corresponds logically to a cnot operation. If channel B is closed, nothing happens to the incoming signal and it ends up in D1. If B is open, then the signal ends up at detector D2. The cnot would be realized by bringing in Bob as the controller that affects the space-time topology. This kind of control could make possible human-quantum computer interaction and, if ordinary computers can have quantum coherence in time scales longer than the clock period (in principle possible in the TGD Universe!), also human-computer interaction. As a matter of fact, there is evidence for this kind of interaction: a chicken gets imprinted on a robot and the behavior of the robot begins to correlate with that of the chicken! Maybe a cnot coupling with the random number generator of the robot is involved!

  2. The second requirement is quantum coherence in meso- or even macroscopic scales. Number-theoretic TGD predicts a hierarchy of effective Planck constants h_eff, which label the phases of ordinary matter that can be quantum-coherent on arbitrarily long length and time scales. These phases behave like dark matter and explain the missing baryonic matter, whereas dark energy in the TGD sense explains galactic dark matter. They enable quantum coherence at the nano- and macro levels.
    1. This makes possible the mesoscopic quantum entanglement and brings to quantum computation the hierarchy of Planck constants which has dramatic implications: consider only the stability of the qubits against thermal perturbations. Braided monopole flux tubes making possible topological quantum computation in turn stabilize the computations at the space-time level.
    2. There are also deep implications for classical computation (see this, this, and also this). Classical computers could become conscious, intelligent entities in the TGD Universe if a quantum coherence time assignable to the computer exceeds the clock period. Also the entanglement of a living entity with a computer could make the computer a part of the living entity. Control of computers by living entities using a cnot coupling, which makes counter teleportation possible, could make possible human-quantum computer interaction if ordinary computers can have quantum coherence in time scales longer than the clock period (in principle possible in the TGD Universe!).

    As a matter of fact, there is evidence for the interaction between computers and living matter. A chicken gets imprinted on a robot and the behavior of the robot begins to correlate with that of the chicken! Maybe a cnot coupling with the random number generator of the robot is involved! Here the TGD view of classical fields and the long length scale quantum coherence associated with the classical electric, magnetic and gravitational fields might allow one to understand what is involved (see this and this).

    1. The gravitational field of the Sun corresponds to a gravitational Compton frequency of 50 Hz, the average EEG frequency. Does this mean that we have already become entangled with our computers without realizing what has happened: who uses whom? The Earth's gravitational field corresponds to a Compton frequency of 67 GHz, a typical frequency for biomolecules. The clock frequencies of computers are approaching this limit.
    2. The analogous Compton frequencies for the electric fields of the Sun and the Earth (see this) are also highly interesting, besides the cyclotron frequencies for monopole flux tubes, in particular for those carrying the "endogenous" magnetic field of 2/5 B_E = 0.2 Gauss postulated by Blackman to explain his strange findings about the effects of ELF radiation at EEG frequencies on the vertebrate brain.

    For a summary of earlier postings see Latest progress in TGD.

    For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.

Monday, March 04, 2024

About the TGD counterpart of the inflationary cosmology

The question of Marko Manninen related to the inflation theory (see this) inspired the following considerations related to the TGD counterpart of the inflationary period assumed to precede the radiation dominated phase and to produce ordinary matter in the decay of inflaton fields.

Recall that inflation theory was motivated by several problems of the standard model of cosmology: the almost constancy of the temperature of the cosmic microwave background; the near flatness of 3-space, implying in standard cosmology that the mass density is very nearly critical; and the empirical absence of the magnetic monopoles predicted by GUTs. The proposal solving these problems was that the universe had a critical mass density before the radiation dominated cosmology, which forced an exponential expansion, and that our observable Universe, defined by the horizon radius, corresponds to a single coherent region of 3-space.

The critical mass density was required by the model, and the exponential expansion implied approximate flatness. The almost constant microwave temperature would be due to the exponential decay of temperature gradients, and the monopole density would be diluted. The model also explained the temperature fluctuations as Gaussian fluctuations caused by the fluctuations of the mass density. The decay of the vacuum energy density assigned with the vacuum expectation values of the inflaton fields was predicted to produce the ordinary matter. There was however also a very severe problem: the prediction of a multiverse: there would be an endless number of similar expanded coherence regions with different laws of physics.

A very brief summary of the TGD variant of this theory is in order before going into the details.

  1. The TGD view is based on a new space-time concept: space-time surfaces are at the fundamental level identified as 4-D surfaces in H=M4×CP2. They have rich topologies and they are of finite size. The Einsteinian space-time of general relativity, as a small metric deformation of the empty Minkowski space M4, is predicted at the long length scale limit as an effective description. TGD however predicts a rich spectrum of space-time topologies, which means deviations from the standard model in short scales, and these have turned out to be essential not only for the understanding of primordial cosmology but also for the formation of galaxies, stars and planets.
  2. In TGD, the role of the inflaton fields decaying to ordinary matter is taken by what I call cosmic strings, which are 4-D extremely thin string-like objects of the form X^2×Y^2 ⊂ M4×CP2. They have a huge energy density (string tension) and decay to monopole flux tubes, liberating ordinary matter and dark matter in the process. That cosmic strings and monopole flux tubes form a "gas" in M4×CP2 solves the flatness problem: M4 is indeed flat!

    TGD also involves the number theoretic vision besides the geometric vision: these visions are related by what I call M^8-H duality; see for instance this, this and this for the odyssey leading to its recent, dramatically simplified form (see this). The basic prediction is a hierarchy of Planck constants heff = nh0 labelling phases of ordinary matter behaving like dark matter: these phases explain the missing baryonic matter, whereas galactic dark matter corresponds to dark energy as the energy of the monopole flux tubes.

    Quantum coherence becomes possible in arbitrarily long scales and in cosmic scales gravitational quantum coherence replaces the assumption that the observed universe corresponds to an exponentially expanding coherence region and saves it from the multiverse. This solves the problem due to the constancy of the CMB background temperature.

  3. In the TGD framework, cosmic strings thickened to monopole flux tubes are present also in the later cosmology and would define the TGD counterpart of the critical mass density of the inflationary cosmology, not at the level of space-time but in M4 ⊂ M4×CP2. The monopole flux tubes are always closed: this solves the problem posed by the magnetic monopoles of GUTs. Monopole flux tubes also explain the stability of long range magnetic fields, which are a mystery in standard cosmology even at the level of planets such as the Earth.
  4. The fluctuations of the CMB temperature would be due to the density fluctuations. In inflation theory they would correspond to the fluctuations of the inflaton field vacuum expectation values. In TGD, the density fluctuations would be associated with the quantum criticality explaining the critical mass density ρ_cr. The fluctuations δρ_cr of the critical mass density for the monopole flux tubes would be due to the spectrum of the values of the effective Planck constant heff: one would have δT/T ∝ δheff/heff. This would give a direct connection between cosmology and quantum biology, where the phases with large heff are in a fundamental role.
See the article About the TGD counterpart of the inflationary cosmology or the chapter TGD and Cosmology.

For a summary of earlier postings see Latest progress in TGD.

For the lists of articles (most of them published in journals founded by Huping Hu) and books about TGD see this.