Wikipedia:Reference desk/Mathematics

From Wikipedia, the free encyclopedia

Welcome to the mathematics section of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.

July 24

Confidence levels in Bayesian analysis

Hi! I've been editing some articles where there is new information from a radiocarbon dating study that uses Bayesian analysis in its dating model:

"Radiocarbon Dating of Oracle Bones of the Late Shang Period in Ancient China". Radiocarbon. 63 (1). Cambridge University Press: 155–175. 2020-10-20. Archived from the original on 2022-03-14. {{cite journal}}: Unknown parameter |authors= ignored (help)

The topic is Chinese oracle bones excavated from Yinxu, but the question I have pertains only to statistics and what can be said in Wikipedia's voice. Only pages 160–165, which include nearly two full pages of non-prose tables, are really relevant to my question. No knowledge of radiocarbon dating, early Chinese history, or bronze age archaeology is required.

The cited study includes a lot of things I don't quite understand, like how it seems to train the model with the same dataset it uses to test the model, how it removes from the dataset individual samples whose "agreement index" is unacceptably low (attributed to sample contamination, p. 161), and how the overall agreement index exceeds 100% (p. 165). I can only assume that since the study has been published in a peer-reviewed journal and conducted by professionals who know what they're doing, these are all acceptable methods and outcomes. (There is one methodological issue, regarding the uncritical acceptance of the reign length of Wu Ding, that is outside of scope.)

My question relates to confidence levels and uncertainty as measured in absolute years.

The study states (p 165):

The calibrated age within the 68% range is assigned to every sample in the model, and then the range, which superposed the 68% ranges of all the samples in a phase, is taken as the 68% range of that phase. Actually, the probability of true age falling in 68% range is higher than 80% or even reach 90%.

In the article Wu Ding, I changed text stating the confidence level was "between 80 and 90 percent", to state that the confidence level "exceeded" 80, and introduced the ±10 year uncertainty, stated by the study's authors in numerous places, including contexts apparently outside the Wu Ding period samples specifically (pages 165 and 168). Later, @Fairnesscounts:, who astutely first cited the study in question in March, changed the text to read that the confidence level "corresponds to" an uncertainty of ±10 years.

My questions are these: given the quote above, does the math stand up? Can it really be said in Wikipedia's voice that the confidence level falls between 80 and 90 percent? Or even that it exceeds 80 percent? Or is the conservative number of 68% more appropriate? And does the confidence level "correspond to" an uncertainty of ±10 years, or is this a separate piece of information produced by the analysis? Folly Mox (talk) 16:41, 24 July 2022 (UTC)[reply]

The wording "the probability of true age falling in 68% range" makes some alarm bells go off – unless they mean, "among the samples for which the true age is known, the probability of their known true ages falling in 68% range". The writing is fuzzy; without thoroughly studying the details (which I don't feel like doing) I cannot work out what they mean – and not just in this sentence. The value 68% is of course quite arbitrary. I suggest keeping it at the level of detail of the article's abstract. One more thing, in the context of the statement, "Twenty-six oracle bone divinations of his reign have been radiocarbon dated to 1254 to 1197 BC" (a range of 57 years), who cares at this level what the confidence level is for a 20-year confidence interval? Why mention it at all?  --Lambiam 18:43, 24 July 2022 (UTC)[reply]
For those who haven't read the Wu Ding article, the idea is to determine when he was king of China from the radiocarbon dating of oracle bones that were used for divinations in his reign. Wu Ding, who ruled around 1200 BC, is the earliest Chinese ruler whose reign can be confirmed by contemporary material.
User Folly Mox added this to the article: "Radiocarbon dating on a sample of these oracle bones has yielded results closely according with dates derived from the literary record." What does this mean? There is no citation. Sima Qian's chronology goes back only to the Gonghe Regency of 841-828 BC. "For Xia, Shang, and Zhou Dynasties before 841 BC, the Shiji recorded only lists of kings with their genealogy, without the years of their reign," according to Liu's article.
Liu's article could certainly have used a better copy editor. But as I interpret it, it gives two alternative ways of expressing the margin of error. The first way is, "the probability of true age falling in [to the given date range] ...is higher than 80% or even reach 90%." The second way is, "the uncertainty of about 10 years possibly exists for the calibrated age of a phase." In Phase I, which corresponds to the reign of Wu Ding, 26 divinations were radiocarbon dated. Fairnesscounts (talk) 20:40, 24 July 2022 (UTC)[reply]
What is the relevance of the interval and the confidence level? Why use 10 years, why not 100 years? Then you have a 99.99% confidence level of something or other. I can do the maths for the given data, using as a model the sum of two real-valued random variables, one uniformly distributed over the interval [−1260, −1182] and the other having a normal distribution with μ = 0, σ = 10. Then I get 97.9%, but I do not know what it means in relation to the reign of Wu Ding. Also, this has no longer anything to do with Bayesian analysis.  --Lambiam 21:47, 24 July 2022 (UTC)[reply]
Lambiam, thank you for your directness! I haven't taken a statistics course since 2009 and I didn't know what importance to attach to the confidence level, but it appears the answer is: none at all. It sounds like including the ±10 year uncertainty should suffice, without mention of confidence levels?
Fairnesscounts, regarding the text I added about the radiocarbon dates closely according with the literary record: apparently I should have been much clearer that I was referring to the dates given by the Cambridge History of Ancient China, based on work by Pankenier and Nivison amongst others, and relying on received texts such as the Yi Zhou Shu and epigraphic evidence such as the Li gui, which is cited in the first paragraph after the lead; and to the results of the Xia–Shang–Zhou chronology project, whose methodology I'm unfamiliar with, also cited in the following paragraph. It's been my understanding of MOS:LEADCITE that facts cited in the body of the article do not require a separate citation in the lead, and I was making the assumption that readers could look at the 1189 end date, the 1250–1192 calculation, and the 1254–1197±10 results and conclude that they closely accord, without weighing down the lead paragraph with a bunch of numbers. Happy for a rewrite. Folly Mox (talk) 00:19, 25 July 2022 (UTC)[reply]
Their (Liu et al.) simulations suggested the calibration has a possible uncertainty of 10 years ("and for an individual boundary ... up to 20 years in an extreme case"). Nevertheless, it is because their calculated probabilities of the true age falling within one standard deviation of the calibration are so high (>80–90%) that they only presented the 1σ results. Be warned I don't know much applied statistics, much less the software, and I can't see from the documentation why the agreement indices are so much greater than 1. But as you can no doubt infer, the agreement indices are not the same as the probability of the measurement agreeing with the model, which they seem not to provide for each sample -- I'm not sure if that's in a supplemental dataset or if that is something that's supposed to be derived from what's given.
Regardless, my limited understanding of radiocarbon dating is that the results of any individual study should be reported with great caution, and it's especially recommended to report details only from reviews and metastudies, and only provide bare-bones general summaries of the more recent primary literature, if you cover it at all. You'd know the state of the scholarship of this field better than I would of course. SamuelRiv (talk) 22:46, 24 July 2022 (UTC)[reply]
I must thank Fairnesscounts for finding the study in the first place: I'm not at all familiar with radiocarbon dating or the state of the field. My edits have been an attempt to urge caution in overspecifying our understanding of the dating of Shang dynasty monarchs based on this study. Folly Mox (talk) 00:19, 25 July 2022 (UTC)[reply]

The paper has a number of odd features.

Paleographers studying oracle bone inscriptions from the late Shang period have divided them on the basis of internal evidence into five periods (called phases in the paper), each corresponding to the reign of one or two kings, e.g. period I corresponds to the reign of Wu Ding, while each of the others corresponds to two kings. The paper aims to assign absolute dates to these periods, and thus to reigns.

The method was to perform C14 dating on samples of bones that had each been assigned a period on textual grounds. Here are the 68% confidence intervals for individual samples (Table 3):

[Chart: the 68% confidence intervals of the individual samples, plotted on a timeline running from 1260 BC to 1020 BC and colour-coded by period (I–V).]

It is surprising that these intervals (especially periods II-IV) cluster so neatly: one might expect the midpoints to range continuously across the interval. Part of the explanation is that the authors discarded about a third of their results due to having too low an "agreement index", presumably between the dating and the expected period sequence (bottom of p161).

They then "superpose" these 68% confidence intervals for individual samples to obtain what they call a 68% range for each period (Table 2):

[Chart: the superposed 68% ranges of Period I (Wu Ding), Period II (Zu Geng and Zu Jia), Period III (Lin Xin and Kang Ding), Period IV (Wu Yi and Wen Wu Ding), and Period V, on the same 1260–1020 BC timeline.]

For example, period I, and thus the reign of Wu Ding, is assigned the range 1254–1197 BC, the union of the 68% ranges for the samples assigned to period I. The authors state that the period boundaries have an uncertainty of about 10 years and up to 20 years in extreme cases, and that "the probability of true age falling in 68% range is higher than 80% or even reach 90%". I don't see how these numbers follow from the methodology, and as User:Lambiam says, the focus on 68% intervals cannot be justified.

I agree with User:SamuelRiv that we should be very cautious about reporting an individual study of this nature. Kanguole 10:40, 26 July 2022 (UTC)[reply]

In assigning an estimated numerical interval Lo–Hi to a historical period P, there are at least three different kinds of association one may seek to assert: (1) P is contained in [Lo, Hi]; (2) [Lo, Hi] is contained in P; (3) neither is (necessarily) contained in the other, but there is some overlap between the two. Each kind requires different methods for determining confidence intervals, given a desired confidence level. Which of the three kinds are the authors of the study seeking to assert? It is not clear to me. A methodological issue: discrepancies between ages obtained by dating and true ages may be systematic across specimens from the same site and period, so they may not be assumed to be statistically independent.  --Lambiam 11:41, 26 July 2022 (UTC)[reply]

July 27

Categories, Rings, and Fields (oh my!)

I'm taking a break from my normal work to just indulge in some abstract math that is way over my head, but I just love even trying to understand 1% of it. I've been interested in Category Theory since the '90s, when Object-Oriented Programming was still a new thing in Computer Science (my real job) and some guys I respected immensely in the Formal Methods community thought Category Theory was THE answer as to how to define specifications for OOP (as well as the answer to the meaning of life, the universe and everything... exaggerating, but just a bit: these were guys who never got outwardly excited about anything, and they behaved like my daughter at a Pink concert when you mentioned Category Theory). I'm just trying to learn the very basics, so I always start with subsumption hierarchies. One of the things that always confused me as I started reading about CT was what a category is compared to a group, a field, etc. After reading the opening paragraphs of the various articles I've come up with the following hierarchy:

 Category
   Set
     Ring
       Field

I.e., all Fields are Rings, all Rings are Sets, and all Sets are Categories. Correct so far? If yes, then what else is a Category that isn't a Set?

Also, the article says that Category theory is an alternative to Set theory as the foundation for math. Does that mean that Category Theory and Set Theory essentially are equivalent? My understanding is that Intuitionist Logic is also an alternative to ZFC Set Theory but that someone proved that the two were equivalent. I.e., anything you can prove in one you could prove in the other and vice versa. Assuming I'm correct about Intuitionist Logic and ZFC is the same true for Category Theory and ZFC? Or are there some things that can be proven with Category Theory but not with ZFC, or vice versa?

Any feedback would be appreciated. MadScientistX11 (talk) 03:20, 27 July 2022 (UTC)[reply]

In the classic definition, fields are algebraic structures with the same signature as rings, but satisfying stronger axioms. This implies that every field is also a ring. Every ring is a monoid in two different ways. These algebraic structures can be formally defined as a triple (Σ, Ω, Ξ), in which Σ is a set (the "carrier"), Ω is a signature over the carrier, and Ξ is the set of axioms. By dropping the last two of these components, we get a set, and that is what people would normally mean by saying that a field is (also) a set. But this is not entirely trivial. Each of the three components is a set, so in coercing (Σ, Ω, Ξ) to a set, why is that set Σ and not Ξ? This is a matter of convention, and not in any way forced by more fundamental considerations.
Sets are not obviously categories. There are several ways to construct a category C from a given set Σ. The simplest way is to take the discrete category over Σ:
ob(C) = Σ;
hom(C) = { 1_x : x ∈ Σ }.
Another way is to use the free monoid over Σ as the hom-set of a one-object category.
There are several ways to formalize set theory and several ways to formalize category theory. In general, the resulting theories are not formally comparable. There is no unique way to position category theory as a foundation of mathematics. Many category theorists are happy enough with ZFC, or, if their sets are getting too large for ZFC, NBG set theory. Inasmuch as one attempts to use topos theory as a foundation, one tends to end up with constructive mathematics, which some think is a Good Thing, but others find too limiting. The latest along these lines is homotopy type theory; the verdict on its touted suitability as a foundation is not yet in.  --Lambiam 08:34, 27 July 2022 (UTC)[reply]
@Lambiam: Great feedback, thanks. To be honest I didn't follow all of it, but it definitely helped. One last thing: I found two books on the Internet for free (and I'm pretty sure these are legitimate free copies, not bootlegs). One is Basic Category Theory by Tom Leinster and the other is Group Theory by J.S. Milne (actually I just realized I need to figure out how Groups fit in here). I thought I would start with the Group Theory book. I doubt I'll finish it, but at least the chapters that go over the basic ideas. Any other suggestions regarding good books on these topics? Thanks again for taking the time to explain. MadScientistX11 (talk) 13:21, 27 July 2022 (UTC)[reply]
You might start with having a look at lattice theory. Most of the concrete examples of lattices (in the section Lattice (order)#Examples) are elementary and should be familiar. Since a lattice is (equivalent to) a partial order, and therefore also a preorder, it immediately gives rise to a category (see the third paragraph of Category (mathematics) § Examples). If you are familiar with Haskell notation, Category Theory for Programmers may be a good introduction. I have only glanced at it, but here is a positive review that also tells us there is a series of video lessons.  --Lambiam 15:39, 27 July 2022 (UTC)[reply]
Just to add, intuitionistic logic and ZFC are not equivalent as you describe. It's a type mismatch to try to compare them. Intuitionistic logic is a type of logic, usually contrasted with classical logic. ZFC is a set of axioms, which is a different sort of thing, and those axioms are generally interpreted in classical logic. 2406:E003:812:5001:844C:EAEC:51CE:B55C (talk) 13:23, 27 July 2022 (UTC)[reply]
An example highlighting the difference is the axiom of (classical) propositional logic known as the law of excluded middle, in symbolic form P ∨ ¬ P, which asserts that every proposition is either true or false. When working in ZFC, this is assumed to be valid. But it is not an axiom or theorem of intuitionistic logic. The same holds for the axiom or theorem (¬ ¬ P) → P. In intuitionistic logic, the two negations do not cancel.  --Lambiam 15:55, 27 July 2022 (UTC)[reply]

Balanced subset of Polytope vertices (with more than two elements)

Let a Balanced subset of Polytope vertices be a subset of the vertices where the vectors from the center of the polytope to the elements of the subset sum to zero.

I am interested in the subsets that have more than two elements and are not made of the union of multiple two-element subsets.

  • The tetrahedron has no balanced subset.
  • The cube has two balanced four-element subsets, which are the two "demi-cubes" of alternating vertices.
  • The Octahedron has no balanced subset that isn't a two-element subset or a union of two-element subsets.
  • For the 12 vertices of the Icosahedron, I'm not sure whether the six-element subsets defined by one vertex together with the vertices at an edge distance of two from it (forming the narrower of the two Pentagonal pyramids) are balanced.
  • The 20 vertices of the Dodecahedron have balanced subsets: the ten two-element subsets and the five four-element subsets corresponding to the Compound of five tetrahedra; not sure if they have more...

(I think this can be extended to the regular 4-dimensional polytopes, but I'm not sure how.) Naraht (talk) 13:28, 27 July 2022 (UTC)[reply]

By "subset" you appear to mean a non-empty proper subset. And I guess you are interested specifically in regular polytopes.  --Lambiam 16:01, 27 July 2022 (UTC)[reply]
I figure to start with the Regular Polytopes. Naraht (talk) 12:36, 28 July 2022 (UTC)[reply]
The question can also be posed for two-dimensional regular polytopes. It seems that the regular n-gon has a non-trivial balanced vertex set if and only if n is composite. Presumably (if true) there is a simple proof, but I don't see it.  --Lambiam 16:14, 27 July 2022 (UTC)[reply]
I think you're right. Let r = e^(2πi/n). Any set of vertices adding to zero would imply an equation of the form 1 + r^(k_1) + ... + r^(k_m) = 0 with k_m < n. But the minimal polynomial of r is 1 + r + ... + r^(n−1) if n is prime. --RDBury (talk) 11:57, 28 July 2022 (UTC)[reply]
Nice to see this proved; it seemed instinctively true. Naraht (talk) 12:36, 28 July 2022 (UTC)[reply]
The Cartesian coordinates of the 12 vertices of a regular icosahedron with an edge length of 2, centred at the origin and oriented along the coordinate axes, are:
(0, ±1, ±φ),
(±1, ±φ, 0),
(±φ, 0, ±1),
where φ = (1 + √5)/2 is the golden ratio. (Each line represents four vertices.) To be balanced, any vertex involving a φ must be counterbalanced by one having −φ in the same position. Likewise for +1 and −1. So, for any vertex in a balanced set, the set also contains the antipodal vertex. A balanced vertex set is therefore a set of antipodal pairs, and any such set is balanced.  --Lambiam 15:54, 28 July 2022 (UTC)[reply]
Nice. If a balanced set contains a balanced proper subset, then the difference is also balanced. So the problem becomes to find the minimal balanced sets. You can then worry about finding disjoint sets of balanced sets. A similar analysis proves that the minimal balanced sets of an octahedron are antipodal pairs. (This is basically the content of bullet 3 above.) The minimal sets of a cube are the four pairs of antipodal points and the two tetrahedra of alternating vertices. (This is the content of bullet 2 above.) That leaves the dodecahedron, but it has at least the 10 pairs of antipodal points and 10 tetrahedra. For higher dimensions, for the regular n-simplex there is only the simplex itself, and for the n-cross polytope there are the n pairs of antipodal points. The n-cube seems tricky, the 24-,120- and 600-cell as well. --RDBury (talk) 23:03, 28 July 2022 (UTC)[reply]
PS. There is a minimal balanced set for the 4-cube of size 4. In fact there are at least 12 of them. Let the vertices be (±1, ±1, ±1, ±1). The plane x_1 = x_2 intersects the 4-cube in 8 points that form a rectangular solid with sides 2, 2, 2√2. The two alternating tetrahedra of this solid are minimal balanced sets. There are 5 additional similar planes you can start with, which makes a total of 12 sets. Each tetrahedron has sides of length 2√2, 2√2, 2√2, 2√2, 2√3, 2√3, so they are not regular. This can be generalized to n-cubes for larger n as well. For the 5-cube I get 50 (non-regular) tetrahedrons in 2 shapes. I don't know if there are any more minimal balanced sets for n-cubes; I don't even want to hazard a guess. --RDBury (talk) 00:42, 29 July 2022 (UTC)[reply]
PPS. There is a minimal balanced set for the 5-cube of size 6: (1, 1, 1, 1, 1), (1, 1, −1, −1, −1), (−1, 1, 1, −1, −1), (−1, −1, 1, 1, −1), (−1, −1, −1, 1, 1), (1, −1, −1, −1, 1). This one doesn't seem to have an easy geometrical interpretation though. It looks like there are ways to generalize to higher dimensions. --RDBury (talk) 01:54, 29 July 2022 (UTC)[reply]
If I'm interpreting the problem correctly, I think any minimal balanced set v_1, ..., v_k of size k for an n-cube can generate a minimal balanced set of equal size for all larger cubes, by alternately prepending 1 and −1 to the vectors to get (1, v_1), (−1, v_2), (1, v_3), ..., (−1, v_k) (note that by default, k is even). First, a set constructed this way is clearly balanced. And if we have some balanced subset of this new balanced set, say (ε_{i_1}, v_{i_1}), ..., (ε_{i_j}, v_{i_j}), which sums to 0, then v_{i_1}, ..., v_{i_j} are a balanced subset of the n-cube minimal balanced set that we generated the new set from, and thus by minimality must be the minimal balanced set itself, meaning that the balanced (n+1)-cube set is minimal. This approach should actually yield minimal balanced sets of size 4 for all n-cubes with n at least 3 (as per the tetrahedra mentioned in your comment three levels above) and size 6 for all n-cubes with n at least 5 (as per the comment this is in reply to.) GalacticShoe (talk) 06:27, 29 July 2022 (UTC)[reply]
I don't see why the two sets are necessarily disjoint. Even if they are, why should their union be a minimal balanced set? Note that the cardinality of these vertex sets (if disjoint) remains the same after projection from ℝ^(n+1) to ℝ^n. Also, there is no such thing as the n-cube minimal set.  --Lambiam 10:47, 29 July 2022 (UTC)[reply]
Whoops, communicatory error on my part, forgot that n is already used for the dimension of the cube. I'll rewrite my argument to make it clearer. GalacticShoe (talk) 00:14, 30 July 2022 (UTC)[reply]
That's a very interesting construction. In terms of the number of possible balanced sets of a given size, it drastically increases the number of possibilities. For sets with 4 elements, there are 6 ways of choosing the 1's and −1's for a new coordinate, 36 ways to add two new coordinates, etc. That means the number of minimal balanced sets of size 4 for the n-cube is at least on the order of 6^n; I was thinking more like 3^n. It should be feasible to compute the exact number for sets with 4 elements as an application of Burnside's lemma. For sets with a larger number of elements Burnside gets harder and harder to use; instead of a group with 24 elements you're looking at a group with k! elements where k is the number of elements. The problem with this approach is that while it might tell you that the number of sets is >0, it won't tell you how to actually construct one. --RDBury (talk) 14:36, 30 July 2022 (UTC)[reply]
PS. According to my calculations, which were a bit more tricky than I was expecting, the number of minimal balanced sets with 4 elements in the n-cube is:
This gives the value 0 for n=1 and 2, and 2 for n=3 (as expected). Pulling out a spreadsheet, the next few values are 24, 200, 1440, 9632 .... This showed up on OEIS as twice sequence A016283. There is a different combinatorial interpretation given there as well. --RDBury (talk) 15:45, 30 July 2022 (UTC)[reply]
Real quick, I'd like to note that the OEIS sequence is shifted over by one, in the sense that if we denote the number of minimal balanced 4-sets in the n-cube as b(n), then b(n) = 2a(n-1). I imagine there must be some nice one-to-two mapping from rectangles in (n-1)-cubes to minimal balanced sets in n-cubes, but as it stands I'm not sure what that mapping would be. GalacticShoe (talk) 20:11, 30 July 2022 (UTC)[reply]

Term for "lowest possible value above X, given available values to sum up"

Sometimes when I'm online shopping I like to see how low I can get the total price to be while still being above the minimum for free shipping, with the right combination of items. I think there's a mathematical term for a value that fulfills this criterion, or the process of trying to add up available values to get to it, but I can't remember it. What word or concept am I looking for? 24.43.123.79 (talk) 18:01, 27 July 2022 (UTC)[reply]

I'd say, "(try to) reach the threshold for free shipping", like foolishgrunt said here. This is not specifically mathematical terminology; "reach the threshold" is used in other contexts as well.[1][2][3] By the way, if a web shop states "free shipping for orders over $50", this threshold is, strictly speaking, $50.01. Most web shops will write "free shipping for orders of $50 or more", or "for orders of at least $50", and then the threshold is $50.00.  --Lambiam 18:59, 27 July 2022 (UTC)[reply]
The term is "least upper bound", or supremum.
As a distracting side note, keep in mind that if the partially ordered set P were in this case to represent something like "all goods available currently to purchase on Amazon", then it's a finite set and the possible subset(s) of items in your cart S can be exactly determined based on the conditions you described, and the supremum of S ($25 to get free shipping, say) must be in the possible S.
But if, on the other hand (and I think I need backup on this because I never took measure theory), you were to define P as something like "all goods potentially/hypothetically available on Amazon ever", then P might not have an upper bound. And if in this hypothetical eternity the laws restricting consumer pricing were also infinitely flexible, and the minimum threshold price is something one must exceed and not merely meet (so that the maximum price of a single item in your cart for the cart to meet the minimum threshold is infinitesimally above $25), you would have an essential supremum of $25. SamuelRiv (talk) 19:09, 27 July 2022 (UTC)[reply]
This is very similar to the knapsack problem (finding the greatest possible value below X, given available values to sum up). catslash (talk) 00:55, 28 July 2022 (UTC)[reply]
The subset sum problem is even closer. --116.86.4.41 (talk) 09:32, 31 July 2022 (UTC)[reply]

July 30

Prove trigonometric identity

Prove that (cos²x + 4 sin x − 1) / (cos²x + 5 sin x − 5) = (sin²x + sin x) / (−cos²x). ExclusiveHerausgeber Notify Me! 14:34, 30 July 2022 (UTC)[reply]

This looks like a homework problem, but to get you started: Clear the fractions by multiplying both sides by the denominators. Multiply everything out, then it's a matter of repeatedly applying cos²x + sin²x = 1. It is, of course, false, and therefore difficult to prove, if either of the denominators is 0. --RDBury (talk) 14:51, 30 July 2022 (UTC)[reply]

Plotting the output of two functions against each other in a 2D graph

This question is more technical than mathematical. I have two formulas, say (y z)/(x y) and ((y z)/(x + y - y z))/(x y) (for x, y, z in [0,1]). How do I represent the relation between the two as a line graph, such that the x axis represents the first formula and the y axis the second? (Say with both axes ranging from 0 to 10.) Can this perhaps be done with WolframAlpha? --Cubefox (talk) 23:13, 30 July 2022 (UTC)[reply]

Unless one of your formulas is injective, you generally don't expect there to be a clean relationship which can be graphed like this. You can make a scatter plot, though. I plotted the example you gave on Desmos using the command .--2406:E003:812:5001:9D4:C84F:FDB4:AA8B (talk) 00:42, 31 July 2022 (UTC)[reply]
Where they are defined, these formulas define continuous mappings from [0, 1]³ to ℝ, from a higher to a lower dimension. By Brouwer's theorem of the invariance of dimension, such mappings cannot be injective.  --Lambiam 06:49, 31 July 2022 (UTC)[reply]

July 31