Wikipedia:Reference desk/Science

This is an old revision of this page, as edited by 71.146.8.88 (talk) at 01:45, 18 March 2012. The present address (URL) is a permanent link to this revision, which may differ significantly from the current revision.


Welcome to the science section
of the Wikipedia reference desk.
Want a faster answer?

Main page: Help searching Wikipedia


How can I get my question answered?

  • Select the section of the desk that best fits the general topic of your question (see the navigation column to the right).
  • Post your question to only one section, providing a short header that gives the topic of your question.
  • Type '~~~~' (that is, four tilde characters) at the end – this signs and dates your contribution so we know who wrote what and when.
  • Don't post personal contact information – it will be removed. Any answers will be provided here.
  • Please be as specific as possible, and include all relevant context – the usefulness of answers may depend on the context.
  • Note:
    • We don't answer (and may remove) questions that require medical diagnosis or legal advice.
    • We don't answer requests for opinions, predictions or debate.
    • We don't do your homework for you, though we'll help you past the stuck point.
    • We don't conduct original research or provide a free source of ideas, but we'll help you find information you need.



How do I answer a question?

Main page: Wikipedia:Reference desk/Guidelines

  • The best answers address the question directly, and back up facts with wikilinks and links to sources. Do not edit others' comments and do not give any medical or legal advice.


March 14

NIST aluminium ion clock

I was watching a popular science program on TV and it said that the aluminum ion experimental clock at the National Institute of Standards and Technology is the world's most precise clock, and is accurate to one second in about 3.7 billion years.

What do they mean by that? If I say my watch is accurate to 1 second a day I mean that it gains or loses no more than 1 second a day relative to GMT or some other standard. In other words, accuracy can only be measured relative to a standard.

But if the NIST clock is truly the most accurate then it is the standard, since there is nothing more accurate to compare it to. In effect, the NIST clock defines time. To check its accuracy you would have to measure 3.7 billion years by some other, more accurate, means, which contradicts the premise.

So, what do they mean by saying it’s the most precise clock? — Preceding unsigned comment added by Callerman (talkcontribs) 00:37, 14 March 2012 (UTC)Reply

They are translating the clock resonator's Q factor, which is a very technical measurement (of phase noise, or frequency stability), into "layman's terms." Expressing the frequency stability in "seconds of drift per billion years" is a technically correct, but altogether meaningless, unit conversion. Over the time-span of a billion years, it's probable that the Q-factor will not actually remain constant. It's similar to expressing the speed of a car in earth-radii-per-millennia, instead of miles-per-hour. The math works out; the units are physically valid and dimensionally correct ([1]); but we all know that a car runs out of gas before it reaches one earth-radius, so we don't use such silly units to measure speed. Similarly, physicists don't usually measure clock stability in "seconds drift per billion years" in practice.
Here's some more detail from NIST's website: How do clocks work? "The best cesium oscillators (such as NIST-F1) can produce frequency with an uncertainty of about 3 x 10^-16, which translates to a time error of about 0.03 nanoseconds per day, or about one second in 100 million years." And, they link to "From Sundials to Atomic Clocks," a free book for a general audience that explains how atomic clocks work. Chapter 4 is all about Q factor: what it is, why we use it to measure clocks, and how good a Q we can build using different materials. Nimur (talk) 01:04, 14 March 2012 (UTC)Reply
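The unit conversion Nimur and NIST describe is easy to check. A minimal Python sketch (the 3 x 10^-16 figure is NIST's; the 3.7-billion-year figure is the TV claim; the year length is a Julian year):

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600  # Julian year, in seconds

def drift_seconds(fractional_uncertainty, years):
    """Worst-case accumulated time error for a clock whose rate is off
    by the given fractional frequency uncertainty over `years` years."""
    return fractional_uncertainty * years * SECONDS_PER_YEAR

# NIST-F1's ~3e-16 translates to roughly one second per 100 million years:
print(drift_seconds(3e-16, 100e6))       # ~0.95 s

# Conversely, "1 s in 3.7 billion years" implies a fractional uncertainty of:
print(1 / (3.7e9 * SECONDS_PER_YEAR))    # ~8.6e-18
```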
Alright Nimur, a question to your answer so you know you're not off the hook yet. Does anyone in science or society-at-large benefit from the construction of a clock that is more accurate than 0.03 nanoseconds per day, or is this an intellectual circle jerk? (which is also fine, by the way, because science is awesome.) Someguy1221 (talk) 01:56, 14 March 2012 (UTC)Reply
Indeed, there are practical applications. If I may quote myself, from a seemingly unrelated question about metronome oscillations, in May 2011: "This "theoretical academic exercise" is the fundamental science behind one of the most important engineering accomplishments of the last century: the ultra-precise phase-locked loop, which enables high-speed digital circuitry (such as what you will find inside a computer, an atomic clock, a GPS unit, a cellular radio-telephone, ...)." In short, yes - you directly benefit from the science and technology of very precise clocks - they enable all sorts of technology that you use in your daily activities. The best examples of this would be high-speed digital telecommunication devices - especially high frequency wireless devices. A stable oscillator, made possible by a very accurate clock, enables better signal reception, more dense data on a shared channel, and more reliable communication. Nimur (talk) 06:43, 14 March 2012 (UTC)Reply
Nimur has not correctly understood the relationship between Q and stability. Q is a measure of sharpness of resonance. If you suspend a thin wooden beam between two fixed points and hit it, it will vibrate - i.e. it resonates. But the vibrations quickly die away, because wood is not a good elastic material - it has internal friction losses. If you use a thin steel beam, the vibrations die away only slowly - it is a better elastic material. Engineers would say the wood is a low-Q material and the steel a high-Q material. All other things equal, a high-Q resonator will give a more stable oscillation rate, because if other factors try to change the oscillation rate the high-Q resonator will resist the change better. But there are other things, and they aren't generally equal. Often a high-Q material expands with temperature - then the rate depends on temperature no matter how high the Q is.
In the real world (i.e. consumers and industry, as distinct from esoteric research in university labs) the benefit of precise clocks is in telecommunications - high performance digital transmission requires precise timing to nanosecond standards - and in metrology - time is one of the 3 basic quantities [mass, length, time] that all practical measurements of any quantity are traceable back to. Precise timing is also the basis of navigation - GPS is based on very precise clocks in each satellite. So folks like NIST are always striving to make ever more precise clocks so they DO have a reference standard against which they can check ever better clocks used in industry etc.
It is quite valid to state the accuracy of clocks as so many nanoseconds error per year, or seconds per thousand years, or whatever. It's often done that way because it gives you a good feel for the numbers. You don't need to measure for a year or 100 years to know. Here's an analogy: When I was in high school, we had a "rocket club". A few of us students, under the guidance of the science teacher, made small rockets that we launched from the school cricket pitch. We measured the speed and proudly informed everyone - the best did about 300 km per hour. That does not mean our rockets burned for a whole hour and went 300 km up. They only burned for seconds and achieved an altitude of around 400 m, but we timed the rockets passing two heights and did the math to get km/hour. If we told our girlfriends that the rockets did 80 m/sec, that doesn't mean much to them, but in km/hr they can compare it with things they know, like cars. Keit120.145.30.124 (talk) 03:02, 14 March 2012 (UTC)Reply
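Keit's rocket arithmetic generalises to a one-liner. A sketch - the gate spacing and timing below are made-up numbers for illustration:

```python
def speed_kmh(distance_m, time_s):
    """Average speed between two timing gates, converted from m/s to km/h."""
    return (distance_m / time_s) * 3.6  # 1 m/s = 3.6 km/h

# hypothetical timing gates 200 m apart, crossed 2.4 s apart:
print(speed_kmh(200, 2.4))  # ~300 km/h
print(80 * 3.6)             # 80 m/s is ~288 km/h
```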
I respectfully assert that I do indeed have a thorough understanding of Q-factor as it relates to resonance and oscillator frequency stability. I hope that if any part of my post was unclear, the misunderstanding could be clarified by checking the references I posted. Perhaps you misunderstood my use of frequency stability for system stability in general? These are different concepts. Q-factor of an oscillator directly corresponds to its frequency stability, but may have no connection whatsoever to the stability of the oscillation amplitude in a complex system. Nimur (talk) 06:59, 14 March 2012 (UTC)Reply
Does "Q-factor of an oscillator directly correspond to its frequency stability" as you stated? No, certainly not. An oscillator fundamentally consists of two things: a) a resonant device or circuit and b) an amplifier in a feedback connection that "tickles" the resonant device/circuit to make up for the inevitable energy losses. This has important implications. First, you can have a high Q but at the same time the resonant device can be temperature dependent. As I said above, it doesn't matter what the Q is: if the resonant device is temperature sensitive, then the oscillation frequency/clock rate will vary with temperature. Same with resonant device aging - quartz crystals and tuning forks can have a very high Q, but still be subject to significant aging - the oscillation rate varies more the longer you leave the system running. Second, the necessary feedback amplifier can have its own non-level response to frequency. This non-level response combines with the resonant device response to in effect "pull" the resonance off the nominal frequency. Real amplifiers all have a certain degree of aging and temperature dependence in their non-level response. Also, practical amplifiers can exhibit "popcorn effect" - their characteristics occasionally jump very slightly in value. When you get stability down below parts per 10^8, this can be important. All this means that it HELPS to have high Q (it makes "pulling" less significant), but you CAN have high stability with low Q (if the amplifier is carefully built and the resonant device has a low temperature coefficient), and you can have rotten stability with very high Q. I've not discussed amplitude stability in either of my posts, as this has little or no relevance to the discussion. Keit120.145.166.92 (talk) 12:16, 14 March 2012 (UTC)Reply
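The "pulling" argument above can be made quantitative. For a small phase error introduced by the feedback amplifier, the standard small-signal result is df/f ≈ Δφ/(2Q): high Q shrinks the pulling, but says nothing about the temperature or aging terms. A sketch with illustrative numbers:

```python
def pulled_fraction(phase_error_rad, q):
    """Fractional frequency shift df/f ~= delta_phi / (2*Q) caused by a
    small amplifier phase error in an oscillator feedback loop."""
    return phase_error_rad / (2 * q)

# the same 0.01 rad amplifier phase error through two very different Qs:
print(pulled_fraction(0.01, 0.3))   # Wien-bridge-like Q: large pulling
print(pulled_fraction(0.01, 1e6))   # quartz-like Q: pulling ~5e-9
```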
Several sources disagree with you; Q is a measure of frequency stability. Frequency Stability, First Course in Electronics (Khan et al., 2006). Mechatronics (Alciatore & Histand), in the chapter on System Response. Our article, Explanation of Q factor. These are just the few texts I have on hand at the moment. I also linked to the NIST textbook above. Would you like a few more references? I'm absolutely certain this is described in gory detail in Horowitz & Hill. I'm certain I have at least one of each: a mechanical engineering text, and a physics textbook, and a control theory textbook, on my bookshelf at home, from each of which I can look up the "oscillators" chapter and cite a line at you, if you would like to continue making unfounded assertions. Frequency stability is defined in terms of Q. Q-factor directly corresponds to frequency stability. Nimur (talk) 18:34, 14 March 2012 (UTC)Reply
(1) If you read the first reference you cited (Khan) carefully, it says with math the same as I did in just words: high Q helps but is not the whole story. It says high Q helps, but for any change, there must be a change initiator - so if there is no initiator, there's no need for high Q. Nowhere does Khan say stability directly relates to Q. In fact, his math shows where one of the other factors gets in, and offers a clue on another. (2) I don't have a copy of your 2nd citation, so I can't comment on it. (3) The Wikipedia article does say "High Q oscillators ... are more stable" but this is misleading, as Q is only one of many factors. (4) I don't think you'll find anywhere in H&H where it says Q determines frequency stability. With respect to your good self Nimur, you seem to be making 3 common errors: (a) you are reading into texts what you want to believe, rather than reading carefully; (b) like many, you cite Wikipedia articles as an authority - that's not what Wikipedia is for; the articles are good food for thought and hints on where to look and what questions to ask, but are not necessarily accurate; (c) you haven't recognised that Khan, as a first course presentation, gives a simplified story that, while correct in what it says, does not cover all the details. Rather than dig up more books, read carefully what I said, then go back to the books you've already cited.
A couple of examples: Wien bridge RC oscillator - Q is 0.3, extremely low, but with careful amplifier design temperature stability can approach 1 part in 10^5 over a 30 C range; 1 part in 10^4 is easy. 2nd example: I could make up an LC oscillator with the inductor a high-Q device (say 400) and the C a varicap diode (Q up to 200). The combined in-circuit Q can be around 180. That should give a frequency stability much much better than the Wien oscillator with its Q of only 0.3. But wait! Sneaky Keit decided to bias the varicap from a voltage derived from a battery, plus a random noise source, plus a temperature transducer, all summed together. So, frequency tempco = as large as you like, say 90% change over 30 C, aging is dreadful (as the battery slowly goes flat), and the thing randomly varies its frequency all over the place. I can't think why you would do this in practice, but it does CLEARLY illustrate that, while it HELPS to have high Q, Q does NOT directly correspond to frequency stability - lots of other factors can and do affect it. Keit121.221.82.58 (talk) 01:25, 15 March 2012 (UTC)Reply
Keit, your unreferenced verbiage is no more than pointless pedantry. And nobody believes you had a girlfriend in high school either. — Preceding unsigned comment added by 69.246.200.56 (talk) 01:57, 15 March 2012 (UTC)Reply
What Keit is saying makes sense to me. According to our article, "[a] pendulum suspended from a high-quality bearing, oscillating in air, has a high Q"—but obviously a clock based on that pendulum will keep terrible time on board a ship, and even on land its accuracy will depend on the frequency of earthquakes, trucks driving by, etc., none of which figure into the Q factor. If you estimate the frequency stability of an oscillator based on the Q factor alone, you're implicitly assuming that it's immune to, or can be shielded from, all external influences. I'm sure the people who analyze the stability of atomic clocks consider all possible external perturbations (everything from magnetic fields to gravitational waves) in the analysis. Those influences may turn out to be negligible, but you still have to consider them.
Also, it seems as though no one has really addressed the original question, which is "one second per 3.7 billion years relative to what?". The answer is "relative to the perfect mathematical time that shows up in the theory that's used to model the clock", so to a large extent it's a statement about our confidence in the theory. I don't know enough about atomic clocks to say more than that, but it may be that the main contributor to the inaccuracy is Heisenberg's uncertainty principle, in which case we're entirely justified in saying "this output of this device is uncertain, and we know exactly how uncertain it is." -- BenRG (talk) 22:59, 15 March 2012 (UTC)Reply
I'm afraid I don't think anyone has addressed my original question. Answers in terms of frequency stability, etc., do not seem to work. How do you know a frequency is stable unless you have something more stable to measure it against? How do you measure a deviation except with another, more accurate clock? But if this is the most accurate clock, what is there to compare it against? Just looking at my watch alone, for example, if I am in a room with no other clocks and no view of the outside daylight, I cannot say whether it is running fast or slow. I can only tell that by comparing it with something which I assume is more reliable. — Preceding unsigned comment added by Callerman (talkcontribs) 06:32, 16 March 2012 (UTC)Reply
I think BenRG has answered your question, but I'll see if I can help by expanding on it a bit. If you are in a closed room with only one watch, and you don't know what's inside the watch, then yes, you can't tell if it's keeping correct time or not. But if you have two or more watches made by the same process, and you understand the process, you can reduce the risk of either or both keeping incorrect time. And if you have two or more watches each working in a different way, you can do better still, even if each watch is of different accuracy. That is, it is possible to use a clock of lesser (only a little) accuracy to prove the accuracy of the better clock, to an (imperfect) level of confidence. This is counter-intuitive, so I'll explain.
Any metrology system, including precision clocks, has errors that fall into 2 classes: Systematic Error (http://en.wikipedia.org/wiki/Systematic_error), and Random Error. Systematic errors are deterministic/consistent errors implicit in the system. If you 100% understand the system (how it works), you can analyse, correct for, reduce, and test for such errors. For example, a clock may have a consistent error determined by temperature. By holding one clock at constant temperature, we can test a second clock over (say) a 10 C range, comparing it with the first. If the second changes (say) 1 part in 10^8 compared to the constant-temperature clock, we could reasonably infer that it will stay within 1 part in 10^9 if kept within 1 C of a convenient temperature. The trouble is, there may be a systematic error you didn't think of - you can't know everything. The risk is reduced (but certainly not eliminated) if you have two or more clocks working on entirely different principles of roughly similar performance. Random errors (e.g. errors due to electrical noise, Heisenberg uncertainty, etc.) are easy to deal with. One builds as many clocks as is convenient, and keeps a record of the variation of each with respect to the average of all of them. There is a branch of statistics (n-sigma analysis, and control charts/Shewhart charts - see http://en.wikipedia.org/wiki/Control_chart) for handling this. After a period of time, the degree of random error, the "accurate" mean, and any clock that is in error due to a manufacturing defect (even tiny errors), will emerge out of the statistical "noise".
It's quite true to say that, at the end of the day, how long a second is, is not decided by natural phenomena but by arbitrary human decision. From time to time the standard second is defined in terms of a tested best available clock. As better clocks get built, we can confine the error with respect to the declared standard to closer and closer limits, but the length of the second is whatever the standards folks declare it to be.
Perhaps my explanation will annoy Nimur and some turkey who has a problem with girls, but I hope it satisfies the OP. Essentially I'm saying much the same as BenRG - as folks build better and better clocks, they have better and better error confidence, based on both theory and testing multiple examples of a new clock against previously built clocks that are nearly as good. But the duration of the standard second is arbitrary. Keit124.178.61.36 (talk) 07:33, 16 March 2012 (UTC)Reply
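The ensemble procedure Keit describes for random errors can be sketched numerically. This is a toy simulation, not how NIST actually does it: five hypothetical clocks with Gaussian daily errors, each compared against the ensemble mean, with no external reference anywhere.

```python
import random
import statistics

random.seed(1)  # reproducible toy data

def simulate_clock_offsets(n_clocks=5, n_days=30, sigma_ns=2.0):
    """Each clock's daily reading differs from ideal time by Gaussian
    noise; comparing each clock to the ensemble mean exposes the random
    error (and any outlier clock) without an external reference."""
    readings = [[random.gauss(0.0, sigma_ns) for _ in range(n_days)]
                for _ in range(n_clocks)]
    ensemble = [statistics.mean(day) for day in zip(*readings)]
    # deviation of each clock from the ensemble average, day by day
    return [[r - e for r, e in zip(clock, ensemble)] for clock in readings]

devs = simulate_clock_offsets()
for i, d in enumerate(devs):
    print(f"clock {i}: mean dev {statistics.mean(d):+.2f} ns, "
          f"stdev {statistics.pstdev(d):.2f} ns")
```

By construction the deviations sum to zero across the ensemble each day; a real clock with a manufacturing defect would show up as a consistently non-zero mean deviation.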
Thanks for taking the trouble to explain. I think I understand, at least in general if not in the detail. — Preceding unsigned comment added by Callerman (talkcontribs) 02:23, 17 March 2012 (UTC)Reply

Hospital de Sant Pau Barcelona Spain.

Is this hospital open for medical care to patients today? 71.142.130.132 (talk) 03:53, 14 March 2012 (UTC)Reply

Have you seen that we have an article on Hospital de Sant Pau? It seems to suggest that it ceased being a hospital in June 2009. Vespine (talk) 04:04, 14 March 2012 (UTC)Reply

Mean Electrical Vector of the Heart

Hello. When would one drop perpendiculars from both lead I (magnitude: algebraic sum of the QRS complex of lead I) and lead III (magnitude: algebraic sum of the QRS complex of lead III), and draw a vector from the centre of the hexaxial reference system to the point of intersection of the perpendiculars to find the mean electrical vector? Sources are telling me to drop a perpendicular from the lead with the smallest net QRS amplitude. Thanks in advance. --Mayfare (talk) 04:46, 14 March 2012 (UTC)Reply

I don't have medical training, so I can only guess, but if I understand correctly:
  • the contractions of different parts of the heart have accompanying electrical signals that move in the same direction as the contraction. Movement towards one of the electrodes of an electrode pair will give a positive or negative signal, while movement perpendicular to that direction would have little influence because it would affect both electrode potentials the same way, increase or decrease.
  • All these movements can be represented by vectors and the mean vector of these is what you're after.
  • For each electrode pair you have measured the positive and negative deflection voltages, the sum of those give you a resulting vector for each electrode pair, and these correspond to the magnitude of the mean vector in each of those directions.
  • If one of these vectors is zero or very small, you know that the mean vector must be perpendicular to that direction, leaving you only one last thing to determine, which way it points.
  • If the two smallest vectors have the same magnitude, then the mean vector will lie on one of the angle bisectors. The info I got has lead I at 0° (to the right), lead II at 60° (clockwise rotation), lead III at 120°. If I and III are equal in magnitude (they don't need the same sign), then the mean vector can be 150° or -30°, but in those cases lead II will be smallest, so the only possibilities left are +60° or -120°, depending on the sign of the lead II result. That's how I understood it, but all the different electrodes made it a bit confusing. So far only arm and leg electrodes seemed involved?? A link to a site with the terminology or examples could help. More people are inclined to have a look if the subject is just a click away instead of having to Google first. Hmmm, would there be a correlation between links in question and number of responses... 84.197.178.75 (talk) 19:37, 14 March 2012 (UTC)Reply
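The bullet points above amount to summing the lead vectors on the hexaxial diagram. A sketch in Python, using the lead angles quoted above (lead I at 0°, II at 60°, III at 120°); the amplitudes are made-up numbers, not clinical values:

```python
import math

LEAD_ANGLES = {"I": 0.0, "II": 60.0, "III": 120.0}  # frontal-plane axes, degrees

def mean_electrical_axis(net_qrs):
    """Sum each lead's net QRS amplitude along its axis direction and
    return the angle of the resultant vector (degrees, -180..180)."""
    x = sum(a * math.cos(math.radians(LEAD_ANGLES[l])) for l, a in net_qrs.items())
    y = sum(a * math.sin(math.radians(LEAD_ANGLES[l])) for l, a in net_qrs.items())
    return math.degrees(math.atan2(y, x))

# equal net amplitudes in leads I and III, same sign (hypothetical numbers):
print(mean_electrical_axis({"I": 5.0, "III": 5.0}))  # 60 degrees, along lead II
```

This reproduces the case discussed above: equal I and III with the same sign puts the resultant along lead II at +60°.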

All basically correct! Our article on Electrocardiography has some information about vectorial analysis, but I'm not sure if that's sufficient for you. In the normal heart the mean electrical vector is usually around about 60 degrees (lead II), but anywhere between -30 and +90 is considered normal. The mean vector, of course, will be perpendicular to the direction of the smallest vector, and in the direction of the most positive vector. Mattopaedia Say G'Day! 07:10, 16 March 2012 (UTC)Reply

Electricity prices

In this Scientific American article, US Energy Secretary Chu says natural gas "is about 6 cents" per kilowatt hour, implying it is the least expensive source of electricity. However Bloomberg's energy quotes say on-peak electricity costs $19.84-26.50 per megawatt hour, depending on location. Why is that so much less? 75.166.205.227 (talk) 09:44, 14 March 2012 (UTC)Reply

The Bloomberg prices are prices at which energy companies can buy or sell generating capacity in the wholesale commodities markets. An energy company will then add on a mark-up to cover their costs of employing staff, maintaining a distribution network, billing customers etc. plus a profit margin. Our article on electricity pricing says that the average retail price of electricity in the US in 2011 was 11.2 cents per kWh. Gandalf61 (talk) 10:48, 14 March 2012 (UTC)Reply
(Per the description,) The figures given in the Scientific American article are apparently the levelised energy cost. This is a complicated calculation and the number depends on several assumptions, and it's not clear to me what market the estimated break-even price is for, although it sounds like it depends on what you count in the costs. Also, I'm not sure why the OP is making the assumption that natural gas is the least expensive source; I believe coal normally is if you don't care about the pollution. Edit: Actually the source says coal is normally more expensive now, although I don't think this is ignoring pollution. Nil Einne (talk) 11:21, 14 March 2012 (UTC)Reply
The Bloomberg quote for gas is $2.31/MMBtu, and 1 MMBtu is 293 kWh, so that would be 0.7 cents per kWh. Maybe he was a factor of 10 off? Or he quoted the European consumer gas prices; those are around 0.05€ per kWh... 84.197.178.75 (talk) 12:56, 14 March 2012 (UTC)Reply
The article I linked above suggests the figures are accurate. For example, the lowest for natural gas (advanced combined cycle) is given as $63.1/megawatt-hour. The cost for generating electricity from natural gas is obviously going to be a lot higher than just the price of the gas. Nil Einne (talk) 13:24, 14 March 2012 (UTC)Reply
Oops, you're right of course. I was thinking 10% efficiency was way too low for a power plant; I saw your comment but the penny didn't drop then... 84.197.178.75 (talk) 15:27, 14 March 2012 (UTC)Reply
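The arithmetic in this sub-thread, redone with a plant-efficiency term. This is fuel cost only, not levelised cost, and the 50% combined-cycle efficiency is an assumed round number:

```python
MMBTU_TO_KWH_THERMAL = 293.07  # 1 MMBtu of heat, expressed in kWh

def fuel_cost_cents_per_kwh_electric(gas_price_usd_per_mmbtu, plant_efficiency):
    """Fuel-only cost of one kWh of electricity generated from gas at a
    given thermal-to-electric conversion efficiency (0..1)."""
    cents_per_kwh_thermal = gas_price_usd_per_mmbtu * 100 / MMBTU_TO_KWH_THERMAL
    return cents_per_kwh_thermal / plant_efficiency

# Bloomberg's $2.31/MMBtu through an assumed 50%-efficient combined cycle:
print(fuel_cost_cents_per_kwh_electric(2.31, 0.50))  # ~1.6 cents/kWh
```

The gap between this ~1.6 cents/kWh fuel cost and the ~$63.1/MWh levelised figure quoted above is the capital, maintenance, and financing component of the levelised cost.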
Talk of gas-generated electricity being cheaper than coal is puzzling. In the 1970s, baseload coal and nuke were cheap electricity, and natural gas was used in low-efficiency fast-start peakers, typically 20 megawatt turbine units, which could be placed online instantly to supplement the cheaper-fueled generators when there was a loss of generation or to satisfy a short-duration peak load. The peakers might be 10 or 15 percent of total generation for a large utility. The coal generators were 10 times larger (or more) than the gas turbines, and took hours to bring on line. The fuel cost for gas was over 4 times the fuel cost for other fossil fuels, and 18 times as much as for nuclear. The total cost was over 3 times as much for gas as for other fossil and 8 times as much as nuclear. Is gas now being used in large base-load units, 300 to 1000 megawatt scale, to generate steam to run turbines, rather than as direct combustion in turbines? Edison (talk) 15:33, 14 March 2012 (UTC)Reply
I think so. This 1060 MW plant built in 2002 is very typical of the new gas plants installed from the late 1990s to present. They are small, quiet, relatively clean except for CO2, and have become cookie cutter easy to build anywhere near a pipeline. 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)Reply

I still do not understand why a power company would quote a wholesale contract price less than 30% of their levelized price. Even if it was only for other power companies, which I don't see any evidence of, why would they lose so much money when they could simply produce less instead? 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)Reply

In some areas (for example, in California), energy companies do not have a choice: they must produce enough electricity to meet demand, even if this means operating the business at a loss. Consider reading: California electricity crisis, which occurred in the early parts of the last decade. During this period, deregulation of the energy market allowed companies (like the now infamous Enron) to simply turn the power-stations off if the sale-price was lower than the cost to produce. Citizens didn't like this. To bring a short summary to a very long and complicated situation: the citizens of California fired the governor, shut down Enron, and mandated several new government regulations and several new engineering enhancements to the energy grid. The economics of power distribution are actually very complicated; I recommend "proceeding with caution" any time anyone quotes a "price" without clearly qualifying what they are describing. Nimur (talk) 18:43, 14 March 2012 (UTC)Reply
You should remember what levelised cost is. It tries to take into account total cost over the lifespan and includes capital expenditure etc. It's likely a big chunk of the cost is sunk. Generating more will increase expenditure, e.g. fuel and maintenance and perhaps any pollution taxes etc., and may also lower lifespan, but provided your increased revenue is greater than the increased expenditure (i.e. you're increasing profit) then it will still likely make sense to generate more. The fact you're potentially earning less than needed to break even is obviously not a happy picture for your company, but generating less because you're pissed isn't going to help anything; in particular it's not going to help you service your loans (which remember are also part of the levelised cost). It may mean you've screwed up in building the plant, although I agree with Nimur, it's complicated and this is a really simplistic analysis (but I still feel it demonstrates why you can't just say it's better if they don't generate more). (A slightly more complicated analysis may consider the risk of glutting the market, although realistically you're likely only a tiny proportion of the market.) You should also perhaps remember that the costs are (as I understand it at least) based on the assumption of building a plant today (which may suggest it makes no sense to build any more plants, but then we get back to the 'it's complicated' part). Nil Einne (talk) 20:13, 14 March 2012 (UTC)Reply
Ah, I see, so they have already recovered (or are recovering) their sunk and amortized costs from other contracts and/or previous sales, so the price for additional energy is only the marginal cost of fuel and operation. Of course that is it, because as regulated utilities their profits are fixed. That's great! ... except for the Jevons paradox implications. 75.166.205.227 (talk) 22:08, 14 March 2012 (UTC)Reply
Natural gas is the cheapest energy source delivered directly to the home, at about 1/3 the cost of electricity per BTU, for those of us lucky enough to have natural gas lines. Sure, our homes explode every now and then, but oh well. StuRat (talk) 22:38, 14 March 2012 (UTC)Reply
 
If you extrapolate these numbers for cumulative global installed wind power capacity, you get this 95% prediction confidence interval.
When the natural gas which is affordable (monetarily and/or environmentally) has been burned up by 1000 megawatt power plants, then what heat source will folks use who now have "safe, clean, affordable" natural gas furnaces? I have always thought (and I was not alone) that natural gas should be the preferential home heating mode, rather than electric resistance heat. Heat pumps are expensive and kick over to electric resistance heat when the outside temperature dips extremely low (unless someone has unlimited funds and puts in a heat pump which extracts heat from the ground). Edison (talk) 00:08, 15 March 2012 (UTC)Reply
I agree that we are rather short-sighted to use up our natural gas reserves to generate electricity. Similarly, I think petroleum should be kept for making plastics, not burned to drive cars. We should find other sources of energy to generate electricity and power our cars, not just to save the environment but also to preserve these precious resources. We will miss them when they are gone. StuRat (talk) 07:42, 15 March 2012 (UTC)Reply
I completely agree, but honestly think there is nothing to worry about. Wind power is growing so quickly and so steadily that it has the tightest prediction confidence intervals I have ever seen in an extrapolation of economics data. Also, there is plenty of it to serve everyone and it's going to get very much less expensive and higher capacity on the same real estate and vast regions of ocean very soon. Npmay (talk) 22:01, 15 March 2012 (UTC)Reply
Who did that extrapolation and what are the assumptions? It appears to be based on an exponential growth model—why not logistic growth? -- BenRG (talk) 00:35, 16 March 2012 (UTC)Reply
I agree a logistic curve would be a better model, but when I tried fitting the sigmoids, they were nearly identical -- within a few percent -- to the exponential model out to 2030, and did not level off until so far out that the amount of electricity being produced was unrealistic. Npmay (talk) 01:20, 16 March 2012 (UTC)Reply
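As an illustration of why an exponential fit and a logistic fit can track each other closely over an extrapolation window (the numbers below are synthetic, chosen only for the sketch, not the actual installed-capacity dataset):

```python
# Sketch: compare an exponential model with a logistic model on synthetic
# cumulative-capacity data. All figures here are made up for illustration.
import numpy as np

years = np.arange(1996, 2012)             # observation window
cap = 6.1 * 1.25 ** (years - 1996)        # synthetic data, ~25%/yr growth (GW)

# Exponential fit: a straight line in log space.
b, log_a = np.polyfit(years - 1996, np.log(cap), 1)

def exp_model(t):
    return np.exp(log_a + b * (t - 1996))

# Logistic curve with a large assumed ceiling K: nearly exponential early on.
K = 1e6                                   # GW, assumed carrying capacity
r = b                                     # same intrinsic growth rate
A = K / cap[0] - 1

def logistic_model(t):
    return K / (1 + A * np.exp(-r * (t - 1996)))

for t in (2020, 2030):
    e, l = exp_model(t), logistic_model(t)
    print(t, round(float(e)), round(float(l)), f"diff {abs(e - l) / e:.1%}")
```

With a large enough assumed ceiling, the two models stay within a few percent of each other out to 2030, which matches Npmay's description; the choice of K is what ultimately drives when the sigmoid levels off.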
God, eventually the missionaries of noise will have to travel to the frozen tundra and the Arctic Ocean to prospect for the last pockets of natural sound to ruin with one of their machines. Wnt (talk) 16:13, 18 March 2012 (UTC)Reply

Polyethylene

Is the dimer for polyethene butane? If not, what? Plasmic Physics (talk) 12:25, 14 March 2012 (UTC)Reply

Butene? --Colapeninsula (talk) 12:57, 14 March 2012 (UTC)Reply
Butane is the saturated hydrocarbon C4H10, and cannot be a dimer for polyethene (more correctly known as polyethylene), a saturated hydrocarbon H.(C2H4)n.H. Perhaps you meant "Is the dimer for butane polyethylene?". For the polyethylene with n=2, H.(C2H4)n.H reduces to C4H10, i.e. it IS butane. But for all n<>2, the ratio of C to H changes, so the answer is still no. To form dimers, you need two identical molecules combined without discarding any atoms. Keit120.145.166.92 (talk) 13:12, 14 March 2012 (UTC)Reply
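The counting argument can be sketched in a few lines (a trivial formula check, nothing chemistry-specific): the chain H.(C2H4)n.H has formula C(2n)H(4n+2), so only n=2 reproduces butane, and the C:H ratio shifts for every other n.

```python
# Formula of the chain H.(C2H4)n.H: 2n carbons, 4n + 2 hydrogens.
def chain_formula(n):
    return 2 * n, 4 * n + 2   # (carbons, hydrogens)

for n in range(1, 6):
    c, h = chain_formula(n)
    print(f"n={n}: C{c}H{h}  C:H = {c / h:.3f}")
```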

The only solution to that would be cyclobutane? Plasmic Physics (talk) 22:21, 14 March 2012 (UTC)Reply

That may very well not be the answer either: the only solution that preserves the elemental ratio is a diradical, and polyethylene is not a diradical. Plasmic Physics (talk) 22:30, 14 March 2012 (UTC)Reply

@Colapeninsula: Butene would require a middle hydrogen atom to migrate to the opposite end of the chain, not likely - the activation energy would be pretty high. Plasmic Physics (talk) 23:51, 14 March 2012 (UTC)Reply

Dimerization to form butene is apparently easy if you have the right catalyst to solve the activation energy problem:) Googling (didn't even need to use specialized chemical-reaction databases) finds many literature examples over the past decade or so, giving various yields and relative amounts of the alkene isomers. The cyclic result is a pretty common example in undergrad courses for effects of orbital symmetry: is it face-to-face [2+2] like a Diels–Alder reaction, or crossed (Möbius would be allowed whereas D–A is Hückel forbidden/antiaromatic), or is it even a radical two-step process (activation energy?) or electronically excited [2+2] (Hückel-allowed) in the presence of UV light? DMacks (talk) 19:29, 17 March 2012 (UTC)Reply

OK, so both forms are allowed. Which one has the lower ground state? What if it was an icosameric polymer? Plasmic Physics (talk) 22:19, 17 March 2012 (UTC)Reply

Dissuading bunnies from eating us out of house and home...

... literally. My daughter's two rabbits, when we let them roam the house, gnaw the woodwork, the furniture, our shoes... Is there some simple means by which we can prevent this? Something nontoxic, say, but unpleasant to their noses that we can spray things with?

Ta

Adambrowne666 (talk) 18:36, 14 March 2012 (UTC)Reply

From some website: The first thing to do is buy your bunny something else to chew on. You can buy bunny safe toys from many online rabbit shops. An untreated wicker basket works well too. They also enjoy chewing on sea grass mats. To deter rabbits from chewing on the naughty things, try putting some double sided sticky tape on the area that is being chewed. Rabbits will not like their whiskers getting stuck on the tape. You can also try putting vinegar in the area too, as rabbits find the smell and taste very very offensive. Bitter substances tend not to deter rabbits as they enjoy eating bitter foods (ever tried eating endive? very bitter.)
Or google "rabbit tabasco" for other ideas .... 84.197.178.75 (talk) 19:59, 14 March 2012 (UTC)Reply
Why would you let bunnies run free indoors ? Don't they crap all over the place ? Not very hygienic. Maybe put them in the tub and rinse their pellets down after an "outing". StuRat (talk) 22:26, 14 March 2012 (UTC)Reply
Fricasseeing might help. Supposedly they taste just like chicken. ←Baseball Bugs What's up, Doc? carrots23:38, 14 March 2012 (UTC)Reply
Rabbits tend to go to the toilet in the same spot and they're pretty easy to litter-box train. It's harder, but not impossible, to train some rabbits not to chew things, but we never managed it with our two dwarf bunnies. For that reason we just don't leave them in the house unsupervised. We do let them run around the house sometimes and they won't go to the toilet on the floor, but we did catch one of them once chewing on the fridge power cable, which was the final straw for giving them free rein of the house. Vespine (talk) 23:53, 14 March 2012 (UTC)Reply
As a final point, I do remember reading that some bunny breeds are just more suitable as "house" pets than others. There are plenty of articles on the subject if you google "house rabbit". Vespine (talk) 23:57, 14 March 2012 (UTC)Reply

Thanks, everyone - yeah, they crap everywhere; we're not masters of the household at all - I don't know how they get away with it: they drop dozens of scats all over the house, but if I do one poo in the living room, people look askance! Will try the double-sided tape and other measures; wish me luck!

  Resolved


March 15

feeding plants carbon

What are some ways to feed plants carbon? Could I administer malic acid to CAM plants through the stomata? There's a product out there on the market that supposedly is an artificial carbon source for plants-- what are some possible mechanisms to "help out" plants with carbon fixation? (This is for ornamental plants, where the large-scale boost of organic material is desired.) I can't use ammonium (bi)carbonates because of the ammonium ion's toxicity to fish. Could I administer an organic acid + bicarbonate, or maybe boric acid? 74.65.209.218 (talk) 06:01, 15 March 2012 (UTC)Reply

Sugar works pretty well. Plasmic Physics (talk) 07:09, 15 March 2012 (UTC)Reply
Googling the topic of feeding plants sugar, I doubt this is a beneficial solution. It seems to encourage bacterial, rather than plant, growth. 74.65.209.218 (talk) 08:14, 15 March 2012 (UTC)Reply
What makes you think your plant is deficient in carbon ? Doesn't it get enough from carbon dioxide in the air and/or organic molecules in the soil ? StuRat (talk) 08:17, 15 March 2012 (UTC)Reply
You could always put an animal in with the plants as they produce carbon dioxide, maybe more fish? Or a hamster. SkyMachine (++) 08:29, 15 March 2012 (UTC)Reply
I'm trying to speed up carbon fixation. Furthermore, these are aquatic plants in a fish tank, which seem to grow slowly. 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)Reply
First, are they growing more slowly than is typical for their species ? Second, what makes you think that carbon is the limiting factor ? Perhaps something else is deficient, such as light. If you don't accurately determine the problem, any "solution" is likely to cause more harm than good. StuRat (talk) 09:59, 15 March 2012 (UTC)Reply
Did your search just focus on sucrose, or did you include a variety of sugars? Plasmic Physics (talk) 08:32, 15 March 2012 (UTC)Reply
My search included mentions of glucose. My search shows that glucose is actually an herbicide. 74.65.209.218 (talk) 09:11, 15 March 2012 (UTC)Reply
I suppose if the sugar was colonized by yeast you would produce carbon dioxide. SkyMachine (++) 08:43, 15 March 2012 (UTC)Reply
This doesn't help. I can't have too many microbes proliferating in the tank water. Furthermore, I am not sure if roots are meant to absorb carbon dioxide or even sugar. I am looking at more sophisticated ways of adding carbon. Does administering malic acid to CAM plant stomata speed up carbon fixation? 74.65.209.218 (talk) 09:10, 15 March 2012 (UTC)Reply
You could always use compressed CO2 like you can get at home brew stores, or SodaStream canisters. Modify a switch to slowly release CO2 to bubble through the tank. SkyMachine (++) 09:29, 15 March 2012 (UTC)Reply
That's going to produce some carbonic acid in the water and make it more acidic, which might not be good for the fish. StuRat (talk) 09:56, 15 March 2012 (UTC)Reply
Seems to be a mix of factors: light, CO2, fertiliser.
  • plants need light to grow, but the more light they get, the more CO2 and trace elements they will need.
  • CO2 diffusion in water is much slower than in air. There can be a CO2-depleted layer of water around the plants. CO2 injection is one of the techniques used, with a CO2 tank, valves, regulators and controller, measuring the pH to adjust the CO2 injection. Seems to be a bit expensive.
  • Trace elements, especially iron it seems, may be lacking. Add some trace element mix for water plants.
Air bubblers, biofilters, and plants will remove CO2 from the water; fish add CO2. Yeast generators are a low-cost way of adding CO2.
Adding CO2, when the lighting is adequate, will increase the oxygen in the water due to more photosynthesis from the plants.
It's all a balancing act, it seems; check out some forums like forum.aquatic-gardeners.org for more info. 84.197.178.75 (talk) 11:25, 15 March 2012 (UTC)Reply
What if you combine malic acid with a buffering agent? Plasmic Physics (talk) 11:28, 15 March 2012 (UTC)Reply


Note about carbon fixation: plants take up CO2 via photosynthesis. That's the ONLY way they take up carbon in significant amounts. Forget about trying to feed them carbon any other way. Do you want to boost carbon fixation because you want the plants to grow, or because you want to reduce the amount of CO2 in the water? For the first you want to add CO2 and light, and trace elements if needed. For reducing CO2 you would add light and more plants, and again trace elements if needed. But from what I understand, faster-growing plants by CO2 injection will result in more O2 in the water for the fish, and under 30 ppm, the CO2 does not hurt them. 84.197.178.75 (talk) 12:19, 15 March 2012 (UTC)Reply


If this is for aquatic plants, they make commercial CO2 injectors specifically intended to introduce extra carbon dioxide into planted tanks. It's a whole category of products on the specialist sites (e.g. here). These get carbon dioxide from pressurized tanks, available either from a welding supply company or from paintball supply companies. You can also put together a DIY system with a homebrew reactor based on sugar and yeast (you don't put the sugar and yeast in the tank; you put them in a separate container and pipe the gas that comes off into the tank). Search /diy co2 aquarium/ or /diy co2 planted tank/ on Google, and you'll get plenty of results, including many step-by-step instructions. Try also /co2 system for aquarium/. While adding the CO2 will depress the pH a little due to the carbonic acid formed, when the plants take in the carbon dioxide, they'll reverse that process, neutralizing the acidity. And the pH drop can be mitigated by making sure your tank has enough buffering capacity (usually referred to as "KH" in the test kits). If you want your plants to really take off once you start adding CO2, you may want to add some additional aquatic plant fertilizer. Try to avoid using regular plant fertilizer, as depending on formulation, it may produce algae blooms. You'll probably also want to invest in a better water chemistry test kit, as keeping acidity/buffering/nitrogen/phosphate/iron/etc. in balance in a planted tank, especially with CO2 injection, is more important than for a tank maintained just for the fish. -- 71.217.13.130 (talk) 16:44, 15 March 2012 (UTC)Reply
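For the pH/KH/CO2 balancing act, there is a widely used hobbyist approximation relating the three quantities. Note its assumptions: the measured KH must be entirely carbonate and no other acids or buffers present, so it only gives a rough estimate.

```python
# Common aquarium-hobbyist approximation (assumes KH is all carbonate):
#     CO2 (ppm) ~ 3.0 * KH (in degrees dKH) * 10 ** (7.0 - pH)
def co2_ppm(kh_dkh, ph):
    return 3.0 * kh_dkh * 10 ** (7.0 - ph)

# At a typical KH of 4 dKH, lowering the pH from 7.2 to 6.6 by CO2
# injection corresponds to going from under 10 ppm to about 30 ppm CO2.
for ph in (7.2, 6.8, 6.6):
    print(f"KH 4 dKH, pH {ph}: ~{co2_ppm(4, ph):.0f} ppm CO2")
```

This is why planted-tank CO2 controllers can work by measuring pH alone once the KH is known, and it ties in with the ~30 ppm safety figure mentioned earlier in the thread.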

I'm trying to find methods cheaper than CO2 injection. Carbon supplements already exist on the market. Specifically, my relative wants me to find an alternative to this product for him, one that he could mass produce. Would feeding plants 3-phosphoglycerate work, or would it just trigger algal blooms?

Also, I know from reading the literature (I'm a chemistry student) that plants appear to fix the bicarbonate they absorb; presumably if they can absorb fulvic acids in organic fertiliser (in mulch for example) then they could absorb sugars or organic acids. Do the organic acids in fertilisers boost the growth of plants directly, or do they simply improve the growth of symbiotic fungi and so forth? 74.65.209.218 (talk) 19:38, 16 March 2012 (UTC)Reply

A quick web search (/Flourish Excel ingredients/) turns up posts claiming that the active ingredient of Flourish Excel is glutaraldehyde (or rather a polymerized version of glutaraldehyde, which is good, as glutaraldehyde by itself is volatile and highly toxic). An MSDS on the Seachem website [2] says it's specifically polycycloglutaracetal, which matches what's written at Glutaraldehyde#Algaecidal_activity. I've seen some forum posts discussing people using aqueous solutions of glutaraldehyde as a DIY replacement for Flourish Excel, but caveat emptor and all that. -- 71.217.13.130 (talk) 03:56, 17 March 2012 (UTC)Reply

Bullet through the brain

I'm under the impression that shooting a bullet through the brain almost always causes instant death. Is this true, and if so, why? Phineas Gage had a huge tamping iron driven through his skull, yet he remained mostly unaffected. Lobotomies remove the entire prefrontal cortex, yet leave the patient mostly functional. In literature, I routinely read about studies of what happens when this or that area of the brain is lesioned/damaged. Why would a bullet, which is physically small and unlikely to take out a major portion of any brain structure, be so likely to cause death after penetrating the brain? --140.180.5.239 (talk) 06:49, 15 March 2012 (UTC)Reply

As long as the brain stem is intact, there is a possibility of survival. If you want to not be brain dead, then you'll have to miss a few more sections. In addition, a bullet doesn't always make a clean wound; sometimes (depending on specs) the bullet liquefies tissue around it. There is a YouTube video somewhere of what can happen, although demonstrated on an apple. Plasmic Physics (talk) 07:07, 15 March 2012 (UTC)Reply
Curiously, all of the good reviews on gunshot wounds to the brain happen to be in journals my library doesn't have a subscription to. But no, this is certainly not true. Without those reviews, I couldn't come up with many numbers. What I was able to glean from abstracts is that over 2000 American soldiers in Vietnam managed to make it to a hospital alive despite taking a bullet through the brain. As for why a bullet causes so much damage, it's fast and spinning. It doesn't simply poke a hole through the tissue in front of it; rather, a bullet effectively pulls and drags the tissue around it, potentially causing catastrophic trauma. See Zapruder film for a famous example of what that means. If the bullet stops in the brain, which is more likely if it's a hollow point bullet designed to slow down after hitting its target, the brain has to absorb all of that kinetic energy very quickly. Finally, getting shot in the head can cause severe bleeding and can easily send a person into respiratory arrest, none of which will typically happen in the controlled setting of a surgical room. Someguy1221 (talk) 07:09, 15 March 2012 (UTC)Reply
I would guess that most deep penetrating brain injuries result in death... but there are rare exceptions, and bullet wounds are among them. The shooting of Gabrielle Giffords is a salient recent example. In few of these cases, whether that shooting or the case of Gage, or in lobotomies, is there no damage. In fact, the damage is often quite profound. What's remarkable is that the victim doesn't die immediately.
What makes a bullet different from many of the other kinds of head injuries, Gage's probably included, is the sheer velocity of a bullet. A low-velocity bullet, say a .45 ACP caliber, moves at almost 1,000 feet per second (about 680 miles per hour, or 1,100 km/h). A bullet from a modern rifle (military or hunting) is about 3x that speed.
Take a look at hydrostatic shock and stopping power. Hydrostatic shock describes why "remote", i.e. not directly to the brain, bullet impacts can incapacitate almost instantly. You don't have to be a scientist to extrapolate those findings to what direct brain injuries do. Also look at terminal ballistics (not a great article, a lot of it looks like one person's production, but should give you some context). The short answer is that while a bullet is small, the shockwave it creates as it enters an object, particularly an object with features like tissue, creates temporary disruptions much larger than the projectile itself. I actually doubt there's too much tumbling in brain tissue, although I could be very wrong about that. But there are a lot of very morbid journal articles (the above articles reference some of them) that talk about how occasionally supposedly more "humane" bullets have counterintuitive effects. (Sidenote: the Hague Convention requires militaries to use full metal jacketed bullets, though there's some debate over the differences between hollow point and full metal jacketed rounds.) The brain is particularly sensitive in this respect, which is why these injuries are usually fatal. Shadowjams (talk) 07:26, 15 March 2012 (UTC)Reply
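As a rough sketch of the velocity point: kinetic energy grows with the square of velocity, so a much lighter rifle bullet can still deliver several times the energy of a heavy pistol bullet. The masses and velocities below are ballpark published figures, not exact load data.

```python
# KE = 1/2 * m * v^2; mass in grams, velocity in m/s, result in joules.
def ke_joules(mass_g, v_mps):
    return 0.5 * (mass_g / 1000.0) * v_mps ** 2

pistol = ke_joules(14.9, 260)   # .45 ACP, 230 gr bullet at ~850 ft/s
rifle = ke_joules(4.0, 940)     # 5.56 NATO, 62 gr bullet at ~3,080 ft/s

print(f".45 ACP: ~{pistol:.0f} J   5.56 NATO: ~{rifle:.0f} J   "
      f"ratio {rifle / pistol:.1f}x")
```

The rifle bullet here has less than a third the mass but carries several times the muzzle energy, which is the disproportionate tissue effect being described.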
A few points:
1) A modern rifled weapon, unlike an ancient one, has a spiral groove inside it, designed to spin the bullet to keep it from tumbling. This reduces air resistance and makes it go faster, farther, and straighter. If it continues like this through the brain, it may cause less damage than a tumbling bullet.
2) A slower bullet may actually cause more damage, by ricocheting around in the brain, rather than just going in and out.
3) As mentioned above, there are hollow-point bullets and other types, designed to rip apart on impact, causing much more damage. Such bullets are often illegal.
4) If you survive the initial trauma of the bullet, then infection becomes a major concern. StuRat (talk) 07:37, 15 March 2012 (UTC)Reply
Well, you're wrong on pretty much every point, StuRat. All modern handguns are by definition rifled (this is a pretty elementary point to anyone with a cursory familiarity with firearms)... smoothbore guns are generally considered shotguns or muskets (if black powder)... ever wonder why a handgun that shoots .410 shotgun shells is legal? The answer is... because it has a rifled barrel. Btw, none of that has anything to do with my point... if you could get a musketball to do 1000 fps it'd do substantially more damage too. Again, "straight through" might be true if it was slow, but the high velocity of a bullet has effects on tissue that are disproportionate. As for point 2, there's a huge debate over "energy delivered" and "stopping power" and "hydrostatic shock" and other similar concepts. Many modern militaries have shifted to high-velocity, smaller rounds. I doubt there's much chance for "ricochet" inside the skull with most modern rounds. I've heard that mafia tale that a .22 was used for assassinations for this reason, but I get a strong suspicion that's an urban legend. Your point 3 is again subject to the intense debate about the effectiveness of particular round types. Point 4: I doubt that's true with modern medicine; I think swelling is probably the greater risk. Shadowjams (talk) 09:07, 15 March 2012 (UTC)Reply
I've modified my point 1 accordingly, but don't think you've made your case that my points 2-4 are wrong. On point 4, swelling may also be a major concern, but that doesn't mean that infection isn't. StuRat (talk) 09:51, 15 March 2012 (UTC)Reply
A treatment of last resort for severe cerebral oedema (swelling of the brain) is a decompressive craniectomy, which is basically cutting a hole in the skull. Since the bullet has already done that, swelling may well not be that serious an issue. --Tango (talk) 21:01, 16 March 2012 (UTC)Reply
I've noticed that the old method of immediately sealing any wound has been replaced by a newer method of leaving it open to allow it to drain, so this would help prevent swelling, but does pose a challenge as far as preventing infection. StuRat (talk) 23:25, 16 March 2012 (UTC)Reply
This looks like another great opportunity to mention Mike the Headless Chicken on the science desk.--Shantavira|feed me 08:55, 15 March 2012 (UTC)Reply
Or even Roland the Headless Thompson Gunner. --Jayron32 14:06, 15 March 2012 (UTC)Reply
Or Carlos Rodriguez. --Itinerant1 (talk) 18:41, 15 March 2012 (UTC)Reply
You may also be interested in this man, who lost 43% of his brain in the Falklands War and is still with us: Robert Lawrence (British Army officer). --TammyMoet (talk) 09:35, 15 March 2012 (UTC)Reply
The amazing part is that he's not only with us, but he managed to lead an active life and even to get married after the injury. I'd have expected him to be a vegetable (or at least severely mentally disabled.) --Itinerant1 (talk) 02:30, 16 March 2012 (UTC)Reply
"Am I serious about her ? I have half a mind to marry that girl !". :-) StuRat (talk) 23:27, 16 March 2012 (UTC) Reply

Car radio

I asked this question a few years ago on another forum, but didn't get any replies I felt answered it. I used to own a car where the car radio would sometimes go quiet. A quick push on the front panel of the radio would restore the sound level. So far so straightforward. However, I noticed that driving under high-voltage power lines would also sometimes restore the sound level. Any ideas why this would happen? 86.134.43.228 (talk) 20:00, 15 March 2012 (UTC)Reply

Metal contacts can oxidize, and such an oxide layer can be an insulator, or have semiconductor properties. Pushing the radio may shift the contacts a bit, breaking through the oxide layer. A high voltage can also break through a thin semiconductor layer, and once it does it causes avalanche breakdown: the electrons are accelerated and collide with atoms, which get ionized, creating a chain reaction. That's how zener diodes above 5.5 volts work; see avalanche diode and avalanche breakdown. Usually there's a hysteresis effect, meaning that the voltage at which the conduction stops will be lower than the one where it started. That could be an explanation, with the power lines inducing a higher voltage over the contacts, enough to break through. Also the micro-weld phenomenon seen with coherers could be involved. In general, it would be some thin insulating layer that can withstand the 12 volts over the contacts but breaks down at a higher potential. That's my best guess. 84.197.178.75 (talk) 21:25, 15 March 2012 (UTC)Reply
The following explanations are much more likely: The strength of radio waves often changes dramatically near and under high-voltage power lines. Usually the signal strength falls under power lines, but it can also increase. AM radios incorporate an automatic volume control system (AVC) (the more correct term is automatic gain control) so you don't notice the change as you tune from one station to another, move around, go under power lines, etc. My guess is that there is a bad solder joint in an area affecting the AVC. When the radio passes under the power lines, perhaps the change in signal causes a sufficient change in voltage in the AVC circuit to overcome the oxide layer in the faulty solder joint. FM radios often incorporate a mute circuit, as without it you get a full-volume blast of noise when tuning between stations. Maybe the mute circuit is affected by a crook solder joint, making it mute at too high a signal strength, and the change in signal when passing under power lines is overcoming it. Keit124.178.61.156 (talk) 00:40, 16 March 2012 (UTC)Reply


The question didn't indicate that it specifically affected weaker stations, and it's an unlikely defect for an AVC circuit imo, since these reduce the gain of strong signals rather than boost a weak one. A faulty solder joint affecting the mute/squelch circuit is a possibility, but fixing a bad connection on a PCB with a quick push would only work reliably if that put significant force on the PCB. Pushing the old-fashioned volume or tuner knob would be most effective (a good way to damage it too). So yes, it can be caused by a bad solder joint on the board. But I'm thinking that in that case, it would often be triggered by touching the controls, changing volume or station. The way it's described, it didn't sound like something that would happen several times a day. And it's easily fixed. Makes me think it's more likely a spring-leaf type connector rather than a "force-less" contact. If there's any movement between the contacts due to vibrations, it will cause rapid fretting corrosion of the contacts. You get a buildup of oxide material because the oxide layer gets scratched, exposing fresh metal, increasing the layer thickness and accumulating metal and oxide particles. A "normal" oxide layer on metal conductors will withstand about 0.2 volts. Typical open-circuit voltage somewhere in a radio would be from >1 to several volts, I think, so you need a big oxide layer. A loose connection with that much build-up would behave more like a typical coherer, with the high-resistance state easily triggered by vibrations, more so than when the contact surface is under pressure. Vibration won't separate the contacts; it takes time to build up the oxide layer and a bit of random chance to get to the high-resistance state. And it won't be very stable. Movement or RF noise could return it to the conducting state.
Power lines develop faults with age. The insulators crack or get covered with dirt, causing leakage currents that emit high-amplitude RF noise. And power companies only fix those faults if they have to; to quote "An Important Rule" for technicians resolving power-line RF noise given in an industry publication:
"Perhaps the most difficult hurdle to overcome in this process is to ignore those noises not affecting the customer's equipment. An important rule for efficient and economic RFI troubleshooting is to locate and repair only the source causing the complaint." (Transmission and Distribution World, sept 2004)
But I'm just speculating; anything is possible ;-) 84.197.178.75 (talk) 18:49, 16 March 2012 (UTC)Reply
You are right - almost anything is possible. It is possible for some types of AVC circuits to go faulty so as to cut off an IF stage rather than leave it full on. Some signal still gets thru due to stray capacitance. Not enough to keep the owner happy, but maybe enough under high-signal conditions to joggle the fault. I used to do car radio, stereo, and TV repair. Two things I learnt very solidly: (1) Intermittent faults are the sneakiest and trickiest things. Prodding in one place can make it come and go, but the dry joint is somewhere else. (2) If a customer says their set is faulty, you can (usually) assume it IS faulty, but don't rely on what they say about it. Non-technical people have funny theories and often leave out or confuse vital information. Like saying their TV is crook only on Channel 2, when in fact it is faulty on all channels but they only watch Channel 2. Keit120.145.40.231 (talk) 03:15, 17 March 2012 (UTC)Reply

Excellent answers, thanks for the replies everyone 86.134.43.228 (talk) 20:10, 18 March 2012 (UTC)Reply

Hydrogen scattering length density?

I was discussing SANS this morning with another grad student, and realized I don't actually know the answer to this question myself: Does anyone have a simple answer for why hydrogen has a negative scattering length density? The article says "neutrons deflected from hydrogen are 180° out of phase relative to those deflected by the other elements", but that's purely phenomenological. I know it's quantum mechanical in origin, but beyond that I don't have a really good grasp of why this is the case. It's slightly counterintuitive to me that something akin to a scattering cross-section would be negative. I've also looked over the article on neutron cross sections, but it in turn just references back to the scattering length density article. Any thoughts? (+)H3N-Protein\Chemist-CO2(-) 21:55, 15 March 2012 (UTC)Reply
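A numeric sketch of the consequence being asked about, using standard tabulated coherent scattering lengths and an approximate ~30 Å³ molecular volume for water (the triplet/singlet values at the end are the free n–p scattering lengths; the factor of 2 is the bound-atom reduced-mass correction (A+1)/A for A = 1, so treat this as an order-of-magnitude derivation, not an authoritative one):

```python
# Coherent neutron scattering lengths (fm) and the resulting SLDs.
b = {"H": -3.739, "D": 6.671, "O": 5.803}
v_water = 30.0   # approximate molecular volume of water, cubic angstroms

def sld(atoms):
    # SLD = sum(b_i) / V; 1 fm / A^3 = 10 * 1e-6 A^-2, hence the factor 10.
    return sum(b[a] for a in atoms) / v_water * 10   # units of 1e-6 A^-2

print(f"H2O SLD: {sld('HHO'):+.2f}e-6 A^-2   D2O SLD: {sld('DDO'):+.2f}e-6 A^-2")

# Where the negative b_H comes from: the coherent value is the spin-weighted
# average (3/4 triplet + 1/4 singlet) of the free n-p scattering lengths,
# and the large negative singlet term dominates.
a_triplet, a_singlet = 5.42, -23.71   # fm
b_H_est = 2 * (0.75 * a_triplet + 0.25 * a_singlet)   # bound-atom estimate
print(f"estimated b_H ~ {b_H_est:.2f} fm (tabulated: {b['H']} fm)")
```

The sign flip between H2O (negative SLD) and D2O (strongly positive) is exactly what makes contrast matching in SANS possible.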

*bump* (+)H3N-Protein\Chemist-CO2(-) 14:22, 16 March 2012 (UTC)Reply
??? (+)H3N-Protein\Chemist-CO2(-) 22:42, 18 March 2012 (UTC)Reply

To what extent is a fermion's position part of its quantum state for the purposes of Pauli exclusion?

Recently this controversy regarding a Brian Cox (physicist) lecture was brought to my attention. Although it is only touched on briefly by the many people objecting to May's interpretation of the Pauli exclusion principle, it is generally agreed that position is part of an electron's quantum state. But to what extent is that so? For example, two electrons orbiting the same helium nucleus are forced into different spins because they are close enough together, and similar things cause Pauli exclusion in much larger molecular orbitals. But how far apart do two electrons need to be before they can otherwise both exist in the same quantum state? Npmay (talk) 22:07, 15 March 2012 (UTC)Reply

To do this correctly, you need to solve the wave function for interacting electrons, which is very hard. (Why is it hard? Because the potential energy is not constant - much like any non-quantum n-body problem - only, also add the complexity of quantized states.) If you take your ordinary quantum mechanics textbook, they'll walk through the solutions for a single electron around a highly-ionized atomic nucleus; and usually, they'll assume the potential energy function for a stationary, electrostatic potential well. But if you have multiple moving charged particles, you can't do this; the math becomes quite difficult. If you'd actually like to work it out, I can recommend several good texts to walk you through the math - but let's be honest: physics students (who are very smart people) usually spend something like a full year working the basic mathematics that describes the quantum-mechanically correct electron orbit, during the course of a two or three semester advanced physics class, and still do not even solve for two electrons. So, the probability that we can summarize this quickly or easily is very low.
If you're looking for a one-line answer, though, let's phrase it this way: "The farther apart the electrons, the greater the probability that they are non-interacting." Quantized states notwithstanding, electron-electron interactions are modeled by a Coulomb potential, whose strength falls off as the inverse of distance. Nimur (talk) 23:00, 15 March 2012 (UTC)Reply
So, the extent to which the electrons interact, which is proportional to the strength of the electromagnetic force in accordance with the inverse square law, determines whether they are in the same position for the purposes of being in the same quantum state? That would make some sense. It would also resolve the controversy in that distant electrons only have a tiny but nonzero probability of being subject to Pauli exclusion. Is that good enough to avoid the math details? Npmay (talk) 23:29, 15 March 2012 (UTC)Reply
Electromagnetic interaction doesn't really have anything to do with it—in everything that I wrote below, it's irrelevant whether the fermions are electrons or neutrinos or (hypothetical) particles that don't interact at all. -- BenRG (talk) 23:59, 15 March 2012 (UTC)Reply
The key point is that there's one wave function for the whole system, not one per particle. In a system of two spinless identical fermions confined to a line segment, you can think of the wave function as defined on a square whose corners are "both fermions at the far left", "fermion A at the far left and fermion B at the far right", "both fermions at the far right", and "fermion A at the far right and fermion B at the far left". (In fact it's not fair to give the particles labels since they're indistinguishable, but I can ignore that here, so I will.) The exclusion principle says that the wave function is zero at all points that correspond to the fermions being in the same place, which in this case is the diagonal line from "both fermions at the left" to "both fermions at the right". Since the wave function is continuous, it also approaches zero as you approach that diagonal, but there's no particular bound on how large it can be except exactly on the diagonal. The exclusion principle doesn't make any difference when the wave function is zero (or nearly zero) near the diagonal—in other words, when there's no (significant) chance that the fermions are near each other.
For spin ½ particles (like electrons) you can use four copies of the square, one for "both particles spin-up" and so on. The diagonals in the two squares where the particles have the same spin are zero, but the diagonals in the two squares where they have different spins don't have to be zero.
Regarding Cox's lecture, see WP:Reference desk/Archives/Science/2011 December 18#Pauli exclusion principle and speed of light. His words can be interpreted in various ways, but basically he was just wrong. I mostly agree with Sean Carroll's blog post, but even he seems to believe that every quantum object is spread out over the entire universe, an idea which I mocked in my last post to that old Ref Desk thread. -- BenRG (talk) 23:59, 15 March 2012 (UTC)Reply
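BenRG's point about antisymmetry can be illustrated with a short numerical sketch (the particle-in-a-box modes below are an illustrative choice, not anything from the thread): an antisymmetrized two-fermion wave function vanishes identically wherever the two coordinates coincide, whatever the interaction.

```python
# Sketch of the antisymmetry argument: a two-fermion wave function
#   Psi(x1, x2) = phi_a(x1)*phi_b(x2) - phi_a(x2)*phi_b(x1)
# is exactly zero on the diagonal x1 == x2, with no reference to
# electromagnetism at all. Illustrative single-particle modes on [0, 1]:
import math

def phi_a(x):
    return math.sqrt(2) * math.sin(math.pi * x)       # lowest box mode

def phi_b(x):
    return math.sqrt(2) * math.sin(2 * math.pi * x)   # second box mode

def psi(x1, x2):
    return phi_a(x1) * phi_b(x2) - phi_a(x2) * phi_b(x1)

print(abs(psi(0.3, 0.3)) < 1e-12)  # True: zero when the fermions coincide
print(psi(0.2, 0.7) != 0.0)        # True: nonzero for separated positions
```

Near the diagonal the wave function only approaches zero continuously, matching BenRG's remark that the exclusion principle has no significant effect when the amplitude near coincidence is already negligible.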
How do you decide which particles to include in the "complete system" wave-function? I own some beachfront electrons in Tucson and I feel that their interactions should be included in the wave-function for your two-particle system. Clearly, there must be some sanity in deciding when a particle is "far enough away" that it no longer matters. If this criterion isn't based on the magnitude of the potential-energy function of the interaction (i.e., electrostatic potential, for an electron-electron interaction), then what else would it be? Nimur (talk) 00:11, 16 March 2012 (UTC)Reply
True, you need some criterion to separate system from environment. But electromagnetism has nothing to do with the Pauli exclusion principle, so I don't think it's relevant here. I had two particles in my system because one wouldn't be enough and three would be an unnecessary complication. The two particles are isolated from all outside influence because it's my thought-experiment and I say they are. Electromagnetism is relevant if you're specifically talking about atomic orbitals, but that's complicated enough (as you said) that I couldn't have given anything like the answer I did. -- BenRG (talk) 00:54, 16 March 2012 (UTC)Reply
That doesn't make sense to me. Two adjacent helium atoms have between them two pairs of electrons, each pair of which is in the same quantum state except for its position. If their electromagnetic interaction determines whether they are in the same quantum position as well when they are near, then what determines whether they are in the same quantum position as well when they are further away? Npmay (talk) 01:11, 16 March 2012 (UTC)Reply
Now you're getting into some of the really messy Schrodinger's cat territory, and you've basically asked the vital question: The two electrons, two protons, and two neutrons around a single helium atom represent a single "system" which can be analyzed by a single "wave function" which describes, among other things, the relationship between the two electrons around that atom. This requires quantum mechanics to do accurately. Once you introduce the idea of "two adjacent helium atoms" now you really need to define "adjacent". If the two atoms interact meaningfully, then what you have is essentially a helium molecule of some sort, which is dealt with quantum-mechanically by molecular orbital theory, and the math is identical in spirit to the math to calculate the orbitals around a single helium atom, excepting that it is more complex, as you have 4 electrons and 2 nuclei to deal with now. If you're dealing with two atoms sitting in a box together, occasionally colliding; well now you're into that fuzzy "Schrodinger's cat" area; which is that QM has a real problem describing the behavior of particles interacting classically. Which is not to say that it cannot be, or is not, done. It's just that, at some level, the quantum mechanical solution to a problem and the classical mechanics solution to a problem converge, so there's no need to go through the exhaustive QM mathematics, which is almost impossible, and instead you can just use the Newtonian math to solve it. Two helium atoms bouncing around a box can basically be described in Newtonian terms and get the same result as using QM terms, so there's no need to do the messy bit... --Jayron32 13:20, 16 March 2012 (UTC)Reply
Thanks for taking the time for that clear explanation. If the Pauli exclusion principle is what makes two helium atoms bounce off each other instead of pass through unaffected, then perhaps the thermodynamic gas compression information inherent in Boyle's law explains how close is close enough to be more in the same system than not. Npmay (talk) 23:05, 16 March 2012 (UTC)Reply
Real gas#Models and Equation of state are certainly complicated and indeterminate enough to fit the bill. Npmay (talk) 11:10, 17 March 2012 (UTC)Reply

March 16

Meteorology question

Reading through the Dodge City, Kansas National Weather Service forecast discussion today I came upon something that somewhat confuses me (not something that happens often, being a meteorology student). It says (I apologize in advance for the all caps, but that's what NWS products use) "FRIDAY EVENING COULD BRING MORE WIDELY SCATTERED CONVECTION HOWEVER AS THE LEADING EDGE OF A LEFT FRONT QUADRANT JET MAY PRODUCE A THERMALLY INDIRECT VERTICAL CIRCULATION NEAR THE OKLAHOMA LINE IN THE EVENING."[3] The part that confuses me is the part about "the leading edge of a left front quadrant jet may produce a thermally indirect vertical circulation", as this is not a concept I have come across before. I also seem to have seen something related to this on the evening TV weather forecast here (2:30 into the video). First, what does the forecast discussion part mean? Second, what is the meteorology behind it (i.e. how does the part that's confusing me cause the convection mentioned in the first part)? Thanks in advance, Ks0stm (TCGE) 00:09, 16 March 2012 (UTC)Reply

I am not sure, but I think "jet" here refers to a front from the jet stream mixed in vertically from downward convection. Npmay (talk) 01:15, 16 March 2012 (UTC)Reply
They appear to have dropped the word streak. See here and here and here. CambridgeBayWeather (talk) 17:17, 16 March 2012 (UTC)Reply

Can a black hole also be, or contain, a neutron star?

If so, would that be ascertainable? Aside from a certain mass range, what else if anything might give evidence for it? Thanks, Rich Peterson198.189.194.129 (talk) 00:33, 16 March 2012 (UTC)Reply

Sort of. Many black holes would have been neutron stars if they were less massive. You cannot ascertain anything about the contents of a black hole directly, but you can infer quite a bit about its mass and former composition from the remnants of its formation. As for the matter in a black hole which was there upon its formation, most of it is in a frame of reference where it is a very hot and compressed quark-gluon plasma, I believe, but I'm not sure, and nobody really knows what the physical state of a singularity is. Everything that falls into the black hole (even a moment after its formation) is, in its own frame of reference, trapped in a state of being continually stretched and heated. Npmay (talk) 01:07, 16 March 2012 (UTC)Reply
I remember reading that large black holes could exist without being very dense. It was a popular science magazine.(not Popular Science)198.189.194.129 (talk) 01:10, 16 March 2012 (UTC)Reply
Supermassive black holes under a string-theoretical interpretation can be less dense than ordinary matter. There is a discussion of this in Fuzzball_(string_theory)#Physical_characteristics. Npmay (talk) 01:13, 16 March 2012 (UTC)Reply
Neutron stars often spin fast, could that give the black hole containing it an angular momentum that we could observe? Or could "Hawking radiation" be affected by the nature of the stuff inside? Thanks, Richard Peterson198.189.194.129 (talk) 01:21, 16 March 2012 (UTC)Reply
Yes, the collapse of a spinning neutron star would produce a rotating black hole, and we could in theory, if we got close enough, measure its rotation by frame dragging or the Penrose process. Smurrayinchester 10:09, 16 March 2012 (UTC)Reply
Just to clarify, a neutron star cannot be in a black hole and still be considered a neutron star. Once it collapses or falls past the event horizon, the material will inevitably fall into the singularity within finite time. Hawking radiation is not affected by the composition of the black hole. Goodbye Galaxy (talk) 18:28, 16 March 2012 (UTC)Reply

As for the density, the claim that black holes aren't very dense is based on a measure of average density, with the region contained by the event horizon considered the volume. Not only can we not see past the event horizon, we can't sense what's beyond it through any means. That also means the gravity field of the black hole beyond the event horizon is the same no matter the internal distribution of mass. As for whether you'd see evidence in the hawking radiation, I have no idea, but physicists can't seem to agree on how you'd "read" the radiation anyway. Someguy1221 (talk) 01:26, 16 March 2012 (UTC)Reply

Could cold dark matter have accumulated in orange dwarf stars?

Thanks again. Richard Peterson198.189.194.129 (talk) 03:16, 16 March 2012 (UTC)Reply

I don't have an answer for you, but why do you specifically pick that one kind of star? Someguy1221 (talk) 03:21, 16 March 2012 (UTC)Reply
If your dark matter does not interact with normal matter or itself, then any that falls into a star should just come out the other side and not stop. So it would be difficult to accumulate, as the dark matter would have to lose momentum to stay in the star. Graeme Bartlett (talk) 04:56, 16 March 2012 (UTC)Reply
Dark matter is still under the influence of gravitation, so why wouldn't it accumulate in a gravitational well? SkyMachine (++) 08:28, 16 March 2012 (UTC)Reply
As Graeme said, it needs to lose momentum and energy in order to accumulate. Ordinary matter does that via radiation or collisions. Dark matter particles do not radiate and they do not (or at best weakly) interact with each other, so that is not possible. Dark matter requires complicated collective processes that redistribute the energy and allow it to accumulate, so-called violent relaxation (that redirect is a bit useless...). And that (as far as we know) only works on larger scales, say small galaxies and up. --Wrongfilter (talk) 09:04, 16 March 2012 (UTC)Reply
(ec)Because there's nothing to stop it when it gets there. The most you'd get is dark matter orbiting the well. Now, these are "weakly interacting", rather than "non-interacting", so they will inevitably collide with something given infinite time. Someguy1221 (talk) 09:06, 16 March 2012 (UTC)Reply
Is this the same situation with regard to black holes? Would the dark matter only orbit around or pass by rather than being trapped? SkyMachine (++) 09:22, 16 March 2012 (UTC)Reply
No. If a dark matter particle should pass the event horizon of the black hole, it is trapped, just like everything else. Someguy1221 (talk) 09:39, 16 March 2012 (UTC)Reply
I asked about orange stars because I've read they can be very old, and have had time to accumulate the stuff. I was thinking time to accumulate is an important factor, because once it's there, it won't readily leave, even in a nova? Perhaps an even better sort of object for me to inquire about would be a very ancient white dwarf, which could be just as old as an orange star, but would have stronger gravity...It does seem to me it could be orbiting, but orbiting far inside the star, for a long time, then, if and after any interactions, probably drop to a lower orbit inside the star?--Rich Peterson198.189.194.129 (talk) 17:41, 16 March 2012 (UTC)Reply
What's the evidence that dark matter doesn't interact with itself? Wnt (talk) 23:54, 16 March 2012 (UTC)Reply
It could, it very well could. But in the simplest models that allow dark matter to hardly ever interact with normal matter, it also hardly ever interacts with itself. Someguy1221 (talk) 00:41, 17 March 2012 (UTC)Reply
No one has mentioned it, but the assumption that weakly interacting massive particles might accumulate in stars has been the basis for a variety of experiments. For example, if they accumulate significantly in the sun, and if they can self-annihilate, then one signature could be an excess of very high energy neutrinos coming from the sun. Super-Kamiokande and similar neutrino telescopes have looked for such signals, though so far there is no definitive detection. The possibility of such dark matter accumulating in stars has been studied in a great deal of detail, though what one expects depends on the properties that are assumed for the unseen dark matter. Dragons flight (talk) 02:39, 17 March 2012 (UTC)Reply
I am skeptical that dark matter is weakly interacting massive particles at all, because there seems to be no actual evidence for their existence, there is no theory of supermassive black hole formation which does not involve aggregation of smaller primordial black holes over time, none of the gravitational microlensing or wide binary star orbit studies have ruled out massive compact halo object dark matter more than a few hundred stellar masses on average, and a relatively small fluctuation in the rate of spacetime expansion between the inflationary epoch and nucleosynthesis would allow for the additional baryons necessary. Furthermore, all particle dark matter theories are unable to explain the cuspy halo problem and the dwarf galaxy problem. Moreover, dozens of intermediate mass black holes have been confirmed in the past couple years, up from two which were known prior. This is a minority view not held by those who stand to gain research grants from the construction of particle dark matter detectors, but I predict in a few years the black holes will prevail over particles. There is a full account of these issues at Talk:Dark matter#Draft table. And in answer to the original question, if an intermediate mass black hole collided or entered a close enough orbit with a dwarf star, it would siphon its matter, producing x-rays in the accretion disk over a time period depending on how direct the collision happened to be. Npmay (talk) 04:51, 17 March 2012 (UTC)Reply

Is ac supply used for driver circuits?

Usually driver circuits are fed with DC, but my project is designed using AC. Could anyone say why AC is implemented? — Preceding unsigned comment added by Ishusri (talkcontribs) 07:26, 16 March 2012 (UTC)Reply

This question cannot be answered as insufficient information has been given. Driver of what? What kind of driver? What do you mean by "fed" - power? signal? What is the project about? Keit120.145.44.170 (talk) 10:08, 16 March 2012 (UTC)Reply
My guess is this is some sort of hobbyist project and the OP is thinking of a power supply module (probably a constant current one although perhaps a constant voltage one) functioning as an LED driver or similar. And the OP is saying many such commercial or hobbyist modules are designed for DC input (which from my limited experience is often true barring ones designed for mains voltage). And the OP wants to know if you can design one suitable for AC input. There's of course no reason you can't design a module that can take AC input and some modules are in fact suitable for AC input (as I said mains voltage ones are an obvious example). Just add some sort of rectification to a DC design is probably the simplest option. You will of course need to consider whether your design needs smoothing etc. However if you're working with mains voltage or other AC input above extra-low voltage, you should make sure you know what you are doing for safety reasons, which you very likely don't if you didn't think of rectification. Nil Einne (talk) 12:41, 16 March 2012 (UTC)Reply
I agree. But the OP could be on about lots of other things - for example he/she might be a student doing some sort of mechatronics course, and his/her class has had a lecture on stepper motor drivers, which normally get powered from DC. Sometimes lecturers throw in a project or assignment question about using a standard stepper motor driver integrated circuit to control a multi-phase AC motor instead of a stepper motor. And the answer is usually yes it can, if you connect it up right and program it right - but to do it you need real understanding, not just the ability to regurgitate the text book & copy datasheet circuits. Homework in other words - if so show you made an effort first. The OP's english is inconsistent - did he mean "could any one say[explain] how ac is implemented?" - or did he mean "could any one say why DC is normally implemented?" Keit60.230.199.158 (talk) 15:42, 16 March 2012 (UTC)Reply
The reason I felt a school or university project was unlikely is it would seem surprising they would be assigned such a project with so little knowledge of basic electronics as to not think of the possibility of rectification. But you're right, another, possibly more likely, explanation is that it wasn't a matter of them not thinking of rectification, but rather that they want to know why their project uses AC. (I somehow misread their last sentence as being a question of whether it's possible to use AC, but rereading it, it sounds more like a question of why AC is used.) Nil Einne (talk) 17:34, 17 March 2012 (UTC)Reply
Please don't advise connecting a home project to the mains to a user who may well not understand what they are doing. SpinningSpark 16:43, 17 March 2012 (UTC)Reply
Actually I clearly stated they should not do so if they don't understand what they are doing, which, as I also said, they almost certainly do not if they needed to ask about it. (I also acknowledged the existence of common products which do use mains voltage as examples, since they are from my experience the most common examples by far. I didn't see this as a problem since, amongst other things, it doesn't particularly sound like the OP is interested in using commercial modules, and if they were, since the OP hadn't already found them, I thought it unlikely my acknowledging their existence would prompt them to.) But do remember it's easily possible they are dealing with an AC voltage which is generally considered safe (below 24 V), so there's no reason not to answer the general question (although as I noted above, I may have misinterpreted the question anyway) about the possibility of using AC voltage just because one of the more likely AC voltages is mains voltage. Particularly if a clear-cut warning is provided that they should not deal with any dangerous voltages without knowing what they are doing. (If the OP had suggested they were dealing with mains voltage or other dangerous voltages then it may be a good idea just to not answer even the general question, but I think it's difficult to make that case here.) Nil Einne (talk) 17:41, 17 March 2012 (UTC)Reply

Milk

There were some creamy deposits on the inside of my milk carton, that left solid clumps in my milk. The milk I bought was whole pasteurised and organic. The milk smelt ok and tasted ok but I was sick, what could these deposits have been? — Preceding unsigned comment added by 92.8.72.150 (talk) 09:49, 16 March 2012 (UTC)Reply

That would be milk fat. Most likely the milk was frozen, which undoes the homogenization process that previously mixed the cream into the milk. The cream naturally "rises to the top", hence that expression, but freezing tends to make the cream stick to the sides of the container.
Separated milk is not dangerous, and this is how people had their milk for most of human history. However, the milk may taste too thin without the fat mixed in, since you're used to it that way. I assume by "I was sick" you mean you found the substance disgusting, not that you literally were sick.
As far as preventing this, is it possible it froze in your refrigerator ? If so, you may need to move it to a different part of the refrigerator or turn the temp up a bit. If it was frozen some time before you bought it, then you might want to buy a different brand or from a different store. Another option is to buy skim milk (nonfat milk), which doesn't contain enough fat to clump up. Of course, if you're not used to it, that stuff tastes like water. StuRat (talk) 10:04, 16 March 2012 (UTC)Reply
I'm not sure the OP is saying the milk made them sick. I think they're saying that they were already sick, and so their ability to smell or taste might have been hindered (as often happens when one has a cold, for example). --Mr.98 (talk) 13:33, 16 March 2012 (UTC)Reply
I interpreted "The milk smelt ok and tasted ok but I was sick…" to mean that the milk made the person sick, or caused some kind of sickness. I think some clarification is in order, as Mr.98's understanding of that wording makes sense too. Bus stop (talk) 13:45, 16 March 2012 (UTC)Reply
The IP is from the UK so they could mean that the milk made them vomit. See here. CambridgeBayWeather (talk) 17:11, 16 March 2012 (UTC)Reply

Now, I thought that "but I was sick" meant that the op was mentally ill and lacked the mental capacity to discern if the milk was good or not. I have a good idea : maybe you could ASK for clarification. — Preceding unsigned comment added by 165.212.189.187 (talk) 17:31, 16 March 2012 (UTC)Reply

google satellite

Where do the 1930 satellite images on Google Earth come from? The satellite article says the first satellite was in 1957 109.162.115.155 (talk) 18:35, 16 March 2012 (UTC)Reply

Which images? Juliancolton (talk) 18:44, 16 March 2012 (UTC)Reply
They're aerial not satellite. Google Earth already uses lots of modern USGS aerial photos, so it's never just been satellite imagery only. 87.113.82.247 (talk) 18:50, 16 March 2012 (UTC)Reply
(yes, despite them labelling the overhead imagery button "satellite"). 87.113.82.247 (talk) 18:52, 16 March 2012 (UTC)Reply

quick chemistry question

I have the mass (g) and volume (mL) of something and I need to find M (molarity) and mol...I can't figure this out so do you know if I'm missing something? — Preceding unsigned comment added by 142.132.6.24 (talk) 19:18, 16 March 2012 (UTC)Reply

Try molecular weight or molar mass. 74.65.209.218 (talk) 19:47, 16 March 2012 (UTC)Reply

OP here. How about just using grams and mL to get moles? Is there a way to do that? — Preceding unsigned comment added by 142.132.70.14 (talk) 01:11, 17 March 2012 (UTC)Reply

You need to know the molar mass. A mole of iron is a lot heavier than a mole of hydrogen. Someguy1221 (talk) 02:00, 17 March 2012 (UTC)Reply


What is "something"? The information you give is not enough. So if somebody told you, here is something that weighs X and has volume Y, he hasn't given you enough info. If he said, here's 120ml solution of 10g Na3O2H7C14X11P4Be2 in an unknown liquid, then you can calculate molarity. 84.197.178.75 (talk) 14:33, 17 March 2012 (UTC)Reply
Or if it's a gas at known pressure and temperature, then the mass and volume are sufficient. 22.4 liters = 1 mole at STP 84.197.178.75 (talk) 14:38, 17 March 2012 (UTC)Reply
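Putting the replies above together, here is a minimal sketch of the arithmetic (the 10 g / 250 mL NaCl figures are invented for illustration, not from the thread):

```python
# moles = mass / molar mass; molarity = moles / litres of solution.
def moles_and_molarity(mass_g, volume_mL, molar_mass_g_per_mol):
    n = mass_g / molar_mass_g_per_mol   # moles of solute
    M = n / (volume_mL / 1000.0)        # molarity in mol/L
    return n, M

# Example: 10 g of NaCl (molar mass about 58.44 g/mol) dissolved to 250 mL
n, M = moles_and_molarity(10.0, 250.0, 58.44)
print(round(n, 4), round(M, 4))  # → 0.1711 0.6845
```

Without the molar mass (or, for a gas, known pressure and temperature), mass and volume alone cannot give moles, which is the point of the replies above.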

need a non-phosphate buffer system that yields a pH of near-neutral (7 +/- 0.4)

I can't use phosphate buffers for organic acids because they encourage algal blooms. What system is optimal? 74.65.209.218 (talk) 19:31, 16 March 2012 (UTC)Reply

What exactly is the application of this? It's pretty easy to just search for lists of different buffers. For example, you can read a list of some buffers used in microscopy here [4]. What's best really depends on the application though. Buddy431 (talk) 21:27, 16 March 2012 (UTC)Reply
He's trying to help his aquarium plants grow without harming his fish: Wikipedia:Reference_desk/Science#feeding_plants_carbon. StuRat (talk) 22:08, 16 March 2012 (UTC)Reply
Will 200 mM sodium acetate/methanol work? Npmay (talk) 23:08, 16 March 2012 (UTC)Reply
Is that OK for fish ? StuRat (talk) 23:20, 16 March 2012 (UTC)Reply
Oh, no, it is not! Sorry I missed that this was for an aquarium. Methanol and sodium acetate are likely to kill and season fish and plants, respectively. Npmay (talk) 05:02, 17 March 2012 (UTC)Reply
You really can't pick random laboratory buffering systems and expect them to work in an aquarium, at least not on a stable, long-term basis. If you're looking to buffer an aquarium, you really only have two choices, and it's not really a "choice", as which one you use is determined by the type of tank you're keeping. The first option is bicarbonate/carbonate buffering, which basically amounts to throwing limestone chips into the tank (the term for aquarium buffering capacity, KH, derives from the German for "carbonate hardness"). The equilibria are a little more complex than an intro chem titration, due to the multiple species, multiple pKas and precipitation effects, so even though the pKas might not line up, the buffering tends to work out, especially in a CO2 injected tank. The one drawback is that dissolving limestone increases your GH (overall hardness), which doesn't work so well for soft water tanks (note that soft water is not the same as softened water - don't use water that's been through a water softener for aquaria, due to the salt content). If you want to maintain a soft water tank, you need to use humic acid to do your buffering, or as the aquarium wonks call it, "blackwater" (because it tints the water). You can either make it yourself by extracting peat moss, or you can buy "blackwater extract" from well-stocked aquarium supply stores (e.g. [5]). From your call for a pH of 7.0, it looks like you might be trying to maintain a soft water tank - be sure to test the GH of your source water if you go down that path. (Note that commercial companies also sell buffering agents which contain secret ingredients which may not be either of the two above.)
By the way, although they're nice blokes and all, Wikipedia & the RefDesk probably isn't the best place to get your aquarium maintenance information. There are gobs of sites on the web about how to maintain a tank (especially a planted tank), including many knowledgeable specialty forums who would be happy to answer your questions. My first stop suggestion, though, is the Aquaria FAQs at the Krib [6], as well as the assembled usenet posts there (e.g. general plant info, CO2 and water hardness, carbonate buffering). The posts may be a bit old, but they still contain valid information. I'm not really hooked into the current planted tank web forums, but doing a Google search for /planted aquarium forum/ gave at least six options on the front page alone, so it's likely you'll be able to find a friendly and knowledgeable community to help you out. (If one is populated by jerks, feel free to move on to a different one.) -- 71.217.13.130 (talk) 03:37, 17 March 2012 (UTC)Reply

March 17

Meaning of ground state chemistry notation

I have a paper which apparently uses the notation 3P to identify oxygen atoms in the ground state, as distinct from 1D and 1S for oxygen atoms in an excited state. The wikipedia page for oxygen says that the (presumably) ground state oxygen electron configuration is 1s22s22p4, as do other web sites (the superscripts are the number of electrons in each mode), such as http://periodictable.com/Elements/008/data.html. How does the notation 3P identify the ground state, or, how does it relate to the notation given in the websites? Wickwack124.182.39.88 (talk) 04:34, 17 March 2012 (UTC)Reply

That's the term symbol, which is used in addition to the electron configuration to indicate the total angular momentum in the particular configuration. The superscript number is the value 2S+1, where "S" is the sum of all ms values for all electrons. Thus, for an oxygen atom in the ground state, you have 1s2 (a +1/2 and a -1/2 spin), 2s2 (a +1/2 and a -1/2 spin) and 2p4 (3x +1/2 and only one -1/2 spin). That gives S=1 (all ms values cancel except in the 2p orbitals, where one +1/2 cancels a -1/2, but there are 2 +1/2 spins left over). So that makes the superscript 2S+1=3. The big letter P is the value "L" in the term symbol, where "L" is the sum of all "ml" values. For s orbitals, ml=0, and for p orbitals ml=+1, 0, -1 for each p orbital. So for oxygen, for all 8 electrons, you have L = 0+0 (1s) + 0+0 (2s) + 1+1+0-1 (2p) = 1, so L = 1, which is P. (Basically, capital letters are the sum of the individual lowercase letters in the term symbol.) The rules for constructing a term symbol for the ground state of an atom are described by Hund's rules. Excited states have electrons in different sets of quantum numbers, so have different term symbols. It is possible for two atoms to have the same notional electron configuration and have different term symbols; for example there are multiple ways that 2p4 could be organized into the three degenerate p orbitals, and only those with the term symbol 3P are considered the "ground state". Arrangements that give different term symbols are considered excited states. --Jayron32 05:19, 17 March 2012 (UTC)Reply
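The bookkeeping in that explanation can be sketched as a toy function (a hypothetical helper, not standard chemistry software, and it ignores the J subscript of a full term symbol): fill the three p orbitals according to Hund's first rule, then read off 2S+1 and L.

```python
# Sketch: ground-state term symbol (2S+1 and L only) for a p^n subshell,
# filling orbitals singly with spin +1/2 first (Hund's first rule),
# then pairing with spin -1/2.
def term_symbol_p(n_electrons):
    ml_values = [1, 0, -1]           # the three degenerate p orbitals
    electrons = []                   # list of (ml, ms) pairs
    for ml in ml_values:             # singly occupy each orbital first
        if len(electrons) < n_electrons:
            electrons.append((ml, +0.5))
    for ml in ml_values:             # then pair up the remaining electrons
        if len(electrons) < n_electrons:
            electrons.append((ml, -0.5))
    S = abs(sum(ms for _, ms in electrons))  # total spin quantum number
    L = abs(sum(ml for ml, _ in electrons))  # total orbital angular momentum
    letters = "SPDFGHIK"                     # L = 0, 1, 2, ... -> S, P, D, ...
    return f"{int(2 * S + 1)}{letters[L]}"

print(term_symbol_p(4))  # oxygen's 2p4 → 3P
```

The same function reproduces the familiar ground terms for other p-block configurations, e.g. 3P for carbon's 2p2 and 2P for fluorine's 2p5, matching Jayron's description that only one arrangement of the electrons gives the ground-state term.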

Are the concepts of closed space and sliders real concepts?

So I am a big fan of The Melancholy of Haruhi Suzumiya, which is a science-fiction series. In that anime, there are aliens, time travelers and ESPers. Obviously, aliens are an established concept in science, as is time travel, and ESP is being researched by some people. However, in that series, there is mention of so-called "closed space", where ESPers can travel to, and "sliders", who apparently can switch between dimensions. I think the "closed space" concept is made up by Tanigawa Nagaru, but what about sliders? Have there ever been scientific theories or conspiracies about their existence? Asking this in the Science RefDesk instead of the Entertainment one since I am more interested in Haruhi Suzumiya's scientific basis. Narutolovehinata5 tccsdnew 10:04, 17 March 2012 (UTC)Reply

Essentially all serious scientific proposals involving spacelike dimensions beyond the three dimensions of everyday life contemplate that such dimensions would be subatomic in size, so no person would ever be directly aware of them. They are also usually closed in the sense that the two-dimensional surface of a very long pipe is closed along the circumference of the pipe's cross sections but not along its length. The reason for those attributes is usually to accommodate the unification of the physical forces, which is motivated by the (possibly merely coincidental) fact that all physical forces appear to be very similar at high energies. However, the prospect of multiple timelike dimensions is sort of an open question which would allow for all kinds of interesting physics. In general, though, it is unlikely, but there are a number of possibilities around the basic objections. Npmay (talk) 10:35, 17 March 2012 (UTC)Reply
That's not the answer to my question. My question is, are the terms "closed space" and "sliders" made-up or not, or are there actual scientific theories about them that call them as such. Narutolovehinata5 tccsdnew 11:00, 17 March 2012 (UTC)Reply
Closed space is an actual mathematical attribute of spatial dimensions which has scientific theories about it calling it such; it is usually called a closed manifold. Sliders are entirely fictional concepts which are certainly not possible without dimensional attributes that are almost never considered seriously in science. Npmay (talk) 11:20, 17 March 2012 (UTC)Reply
Upon review of the fictional literature cited in the original question, I find the Yuki Nagato character most believable, although the most persuasive evidence of extraterrestrial life on Earth is invariably dismissed by serious scientists, but not, in my opinion, for good reasons.
Furthermore, I would say the "closed space" concept is probably less similar to a closed manifold and more similar to a macroscopic brane intersection between multiverses, which is, well, let's just call it fringe science. When branes as scientists theorize them collide, they do things like cause big bangs more often than they open dimensional portals from which one can fight monsters with psionic powers. Npmay (talk) 12:06, 17 March 2012 (UTC)Reply
I'm tempted to think that "sliders" is a reference to Sliders rather than any real physics. Wnt (talk) 20:03, 17 March 2012 (UTC)Reply

how do bridge makers model stresses

In general it seems easy enough to imagine things that support compression or tension, so there must be software where you can just combine rods and suspended elements at whatever angles you want, specify their attributes, and see whether the whole thing collapses or how it behaves. So what is it? --80.99.254.208 (talk) 11:33, 17 March 2012 (UTC)Reply

Physical modeling with finite element analysis CAD software is usually very accurate, but is almost always checked in practice with physical scale models for new structures of nontrivial complexity. Npmay (talk) 11:53, 17 March 2012 (UTC)Reply
You seem like you know a bit about this subject. Could you explain it? --80.99.254.208 (talk) 12:10, 17 March 2012 (UTC)Reply
Have a look at that article and this video. Npmay (talk) 12:39, 17 March 2012 (UTC)Reply
Not quite professional grade, but I recommend anyone interested in learning the basics try out the bridge builder series of games (some are free or free trial). It lets you visualize the stresses, and you will very quickly get better at building bridges :) SemanticMantis (talk) 14:41, 17 March 2012 (UTC)Reply
I was about to say the same thing; online there's "Bridge Thing" and "Cargo Bridge". Also, sites about building bridges from toothpicks give good info, and there's the bridge design software West Point Bridge Designer, free to download. 84.197.178.75 (talk) 14:49, 17 March 2012 (UTC)Reply
The basics of bridge building have been known for centuries, but some considerations, like wind-loading, resonance, and metal fatigue, have only been fully understood in the last few decades. See Tacoma Narrows Bridge (1940) for a case where the first two issues caused a collapse. StuRat (talk) 21:10, 17 March 2012 (UTC)Reply
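The hand calculation that finite element packages generalize, the method of joints, is easy to sketch for a statically determinate truss. A minimal Python illustration (the function name and the two-bar geometry are my own, not from any package mentioned above):

```python
import math

def two_bar_truss_force(load, angle_deg):
    """Member force in a symmetric two-bar truss.

    A weight `load` hangs from a joint supported by two identical
    members, each inclined at `angle_deg` above the horizontal.
    Vertical equilibrium at the loaded joint gives
    2 * F * sin(angle) = load, so each member carries
    F = load / (2 * sin(angle)).
    """
    theta = math.radians(angle_deg)
    return load / (2 * math.sin(theta))

# A 10 kN load on members at 30 degrees: sin 30 deg = 0.5,
# so each member carries 10 / (2 * 0.5) = 10 kN.
print(two_bar_truss_force(10.0, 30.0))
```

Note how the member force blows up as the angle approaches zero; real solvers apply the same equilibrium equations simultaneously at every joint of a large structure.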

Software to draw schematic diagrams?

Hey, is there any software that can be used to draw schematic diagrams of machines or any device for that matter? And I don't mean CAD software; that's used for engineering drawings only. What I mean is something that can be used to draw stuff like this. See, I'm reduced to uploading my horrible drawings to Wikipedia. Thanks! Lynch7 16:53, 17 March 2012 (UTC)Reply

If you're just looking for a free package to make diagrams for Wikipedia, then Inkscape is good and it is in Wikipedia's preferred SVG format. SpinningSpark 17:27, 17 March 2012 (UTC)Reply
If Inkscape has too high a learning curve, LibreOffice Draw might do. Npmay (talk) 21:01, 17 March 2012 (UTC)Reply

crypto rands automatically good for monte carlo?

Is any random number source that's good for crypto automatically good for a Monte Carlo simulation? (I.e., does the Monte Carlo converge on whatever you would see in the wild with those natural conditions/percentages/whatever, rather than ever converging on some fluke of the RNG?)

To illustrate what I'm talking about, int rand() % x is not a good way to Monte Carlo random integers between 0 inclusive and x exclusive, because lower ones tend to come up more than higher ones. Likewise it would be a terrible choice for a crypto-secure random integer.

So, my question is about the general case: if something is good enough for cryptography, is it good enough to Monte Carlo nature with? (I.e., all I have to worry about is finding a crypto library of random numbers, and I can go on my merry way assuming each random number is as good as forking to a random universe to continue, with equal distribution in each random universe...) Forgive me if this belongs on the comp sci desk. --80.99.254.208 (talk) 18:55, 17 March 2012 (UTC)Reply
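The modulo bias the question describes can be made concrete. A minimal Python sketch (function names are illustrative; the standard library's secrets.randbelow already performs the rejection step shown in the second function):

```python
import secrets
from collections import Counter

def biased_rand_below(x):
    """The flawed pattern: one random byte taken modulo x.

    Because 256 is not a multiple of x (for most x), low residues
    get one more preimage than high ones, so they come up more often.
    """
    return secrets.token_bytes(1)[0] % x

def unbiased_rand_below(x):
    """Rejection sampling: discard bytes at or above the largest
    multiple of x, so every residue is exactly equally likely.
    """
    limit = 256 - (256 % x)
    while True:
        b = secrets.token_bytes(1)[0]
        if b < limit:
            return b % x

# With x = 100, bytes 0..255 map so that values 0..55 have three
# preimages each (b, b+100, b+200) while 56..99 have only two --
# a 50% frequency skew from the naive modulo.
counts = Counter(b % 100 for b in range(256))
print(counts[0], counts[99])
```

The same arithmetic is why rand() % x in C skews low, and why both crypto libraries and good Monte Carlo generators avoid the naive modulo.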

Well, cryptography is pretty notorious for not being as good as claimed - it's hard for the user to tell, after all. It's not my field, but I should think that if your monte carlo acts aberrantly, you're already well on the way to cracking the encryption. So I'd ask ... would it be newsworthy that the encryption that you're using had just been cracked by accident? Wnt (talk) 20:01, 17 March 2012 (UTC)Reply


The needs of cryptography, and those of Monte-Carlo methods, are somewhat different. In principle, for example, a bitstream PRNG used for cryptography in certain ways (say, to get initialization vectors) could tolerate some slight bias (say, 50.1% 1s and 49.9% 0s), because bias per se is unlikely to help an attacker much. However, for Monte-Carlo applications, you don't want bias.
That said, as far as I know, cryptographic PRNGs are all unbiased.
The main downside of using a cryptographic PRNG for Monte Carlo is likely to be speed. Monte-Carlo methods typically need random values in huge torrents; getting those from RC4 or something may be a bit slow.
Then there's a different issue: Are you sure you really want pseudo-random numbers at all? Many Monte-Carlo type algos work better when you give them self-avoiding sequences (let's see if subrandom comes up blue), because you don't waste time exploring points of the space that have already been explored. There's a good practical treatment of some of these in Numerical Recipes in C. --Trovatore (talk) 20:12, 17 March 2012 (UTC)Reply
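A concrete example of the self-avoiding sequences mentioned above is the van der Corput sequence, the one-dimensional building block of low-discrepancy point sets. This sketch is my own illustration, not taken from Numerical Recipes:

```python
def van_der_corput(n, base=2):
    """n-th term of the base-b van der Corput sequence: reflect the
    digits of n about the radix point.  Successive terms fill [0, 1)
    far more evenly than independent pseudo-random draws, which is
    why low-discrepancy ("subrandom") points can make Monte Carlo
    integration converge faster.
    """
    q, denom = 0.0, 1.0
    while n:
        n, digit = divmod(n, base)
        denom *= base
        q += digit / denom
    return q

# First terms in base 2: 0.5, 0.25, 0.75, 0.125 -- each new point
# lands in the largest gap left by the previous ones.
print([van_der_corput(i) for i in range(1, 5)])
```

Such sequences are deterministic, so they are useless for cryptography, which underlines that the two sets of requirements really do diverge.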

Why is the urinalysis also called "Routine and Microscopy"?

I would like to understand it, thank you. — Preceding unsigned comment added by 176.13.203.54 (talk) 20:21, 17 March 2012 (UTC)Reply

You don't say where you saw this, but it was likely on a lab slip where someone places a check-mark by the tests they want to order. One might want to order a routine urinalysis, or an examination of the urine under a microscope, or both. If you want both, you'll check off "routine and microscopy". The reason these are separated is that a routine urinalysis consists of chemical tests, and can be done very simply with a urinalysis dipstick, and it can be completely automated, making that part of the test cheaper than a microscopic examination, which requires spinning the urine in a centrifuge and examination by an actual person with a microscope. Our urinalysis article is a little misleading, because "Routine and Microscopy" is not a synonym for urinalysis, but rather an abbreviation of "routine urinalysis and microscopy". A routine urinalysis would include chemical tests for pH, specific gravity, glucose, ketones, protein, nitrite, blood (red blood cells (RBCs)), and leukocyte esterase (from white blood cells (WBCs)) - bilirubin or urobilinogen or other tests may also sometimes be included. But because there can be false-positive chemical tests for blood and WBCs, these would ordinarily be confirmed by microscopic examination, where one can actually see RBCs or WBCs (or other abnormal urine contents) if present. - Nunh-huh 20:51, 17 March 2012 (UTC)Reply
Thank you for help. 176.13.203.54 (talk) 21:42, 17 March 2012 (UTC)Reply
You're more than welcome. By the way, the combination of a routine urinalysis with a microscopic examination of the urine is sometimes called a "complete urinalysis". - Nunh-huh 21:58, 17 March 2012 (UTC)Reply
Thank you. By the way, you're a good explainer and I hope to meet you here a lot down the road. 176.13.203.54 (talk) 23:06, 17 March 2012 (UTC)Reply

Why is the Urine culture also called Diaslide?

I don't understand it. Thank you for help. 176.13.203.54 (talk) 20:49, 17 March 2012 (UTC)Reply

Diaslide is a trademark for a specific brand of urine culture test. Just like "Bic" is the trademark for a specific brand of pen. - Nunh-huh 20:53, 17 March 2012 (UTC)Reply
Youtube has a video showing it in use.[7]--Aspro (talk) 20:56, 17 March 2012 (UTC)Reply
And I imagine it's short for "DIAgnostic microscope SLIDE culture". StuRat (talk) 20:59, 17 March 2012 (UTC)Reply

The relativity of time.

Could time be the big and small force? — Preceding unsigned comment added by 192.148.117.95 (talk) 23:11, 17 March 2012 (UTC)Reply

vanessa1234394!

tahliabrehm — Preceding unsigned comment added by Tahlia1234 (talkcontribs) 23:22, 17 March 2012 (UTC)Reply

Dunno. Give up! What’s the answer?--Aspro (talk) 00:05, 18 March 2012 (UTC)Reply

March 18

Element

What chemical compound is produced when carbon monofluoride and thulium are combined, if any? 71.146.8.88 (talk) 01:45, 18 March 2012 (UTC)Reply