This level-4 vital article is rated B-class on Wikipedia's content assessment scale.
The contents of the Gauss–Jordan elimination page were merged into Gaussian elimination. For the contribution history and old versions of the redirected page, please see its history; for the discussion at that location, see its talk page.
Who put up the goofy picture of a black (actually, green) board, and why? It is hard to read and does not add much. Jfgrcar ( talk) 09:11, 5 December 2011 (UTC)
Can somebody clean up the algorithm? It's poorly done as is. Maybe also add a version in C and FORTRAN, which are formal languages; that way clarity is enhanced.
Ideally the algorithm should be able to deal with m by n matrices, so that those who have a square matrix and those with a column-augmented matrix can all be accommodated.
I also suggest some links to open textbooks that have details on the topic.
—Preceding unsigned comment added by Veganfanatic ( talk • contribs) 01:22, 21 June 2010 (UTC)
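To make the request concrete, here is a minimal sketch of a row-reduction routine that handles general m-by-n (square or augmented) matrices. This is illustrative Python only, not taken from the article or any particular textbook; the function name and structure are my own:

```python
def row_echelon(a):
    """Reduce an m-by-n matrix (list of lists of floats) to row echelon form,
    in place, using partial pivoting. Works for square and augmented matrices
    alike; columns with no usable pivot are simply skipped."""
    m, n = len(a), len(a[0])
    pivot_row = 0
    for col in range(n):
        if pivot_row == m:
            break
        # partial pivoting: choose the row with the largest entry in this column
        best = max(range(pivot_row, m), key=lambda r: abs(a[r][col]))
        if a[best][col] == 0.0:
            continue  # no pivot in this column; move one column to the right
        a[pivot_row], a[best] = a[best], a[pivot_row]
        for r in range(pivot_row + 1, m):
            factor = a[r][col] / a[pivot_row][col]
            for c in range(col, n):
                a[r][c] -= factor * a[pivot_row][c]
        pivot_row += 1
    return a
```

Partial pivoting is included only because the article's analysis section discusses it; dropping the max-row selection gives the naive variant.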
Under the section "General algorithm to compute ranks and bases" the article states:
This echelon matrix T contains a wealth of information about A: ... the vector space spanned by the columns of A has as basis the first, third, fourth, seventh and ninth column of A (the columns of the ones in T), and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
This must be an error. The operations in the algorithm preserve the row space, not the column space, of A. The column space of T is always spanned by standard basis vectors, and this is not true for an arbitrary matrix A. SanderEvers 08:25, 29 June 2007 (UTC)
The page Gauss-Jordan elimination currently redirects here, with many backlinks, but this article makes no mention of this name except to use it with no explanation about half way down. Does anyone know anything about this term - is it a synonym, an extension, a misnomer? And, as my subject heading says, where does the Jordan bit come from? - IMSoP 13:35, 19 Apr 2004 (UTC)
This article currently mentions reduced row echelon form, but not row echelon form, which it should because the two are not the same thing and because row echelon form redirects to this article. — Lowellian ( talk)[[]] 21:36, Oct 8, 2004 (UTC)
It should also be noted that when I learned this, they said proper row echelon form was in the form:
1 n1 n2 | n3
0 1 n4 | n5
0 0 1 | n6
In other words, it is the same as the row echelon form currently used, but it is reduced so that the first non-zero number in each row is a 1. However, the numbers to the right of the 1s are not reduced, as they would be in reduced row echelon form. —Preceding unsigned comment added by 99.153.132.151 ( talk) 03:19, 12 February 2009 (UTC)
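For concreteness, the "unit leading coefficient" form described above can be produced from an echelon matrix by a single normalization pass, without the backward elimination that full reduced row echelon form requires. A sketch in illustrative Python (the function name is my own):

```python
def normalize_pivots(a):
    """Scale each row of a row-echelon matrix so its leading nonzero entry is 1.
    Entries to the right of the pivots are left alone, so the result is the
    'unit pivot' echelon form described above, not reduced row echelon form."""
    for row in a:
        pivot = next((x for x in row if x != 0.0), None)
        if pivot is not None:
            for c in range(len(row)):
                row[c] /= pivot
    return a
```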
As far as I know (but I'm not an expert on numerical analysis), one major reason why Gaussian elimination is never used for solving large systems in floating-point arithmetic is that it is quite unstable. Any error you commit at any point gets propagated. Iterative methods, working on attractive fixed points, are much better in that respect. David.Monniaux 20:37, 15 Dec 2004 (UTC)
All major FEM codes use Gauss or a variant thereof for very large (elastic-linear) systems with more than 100,000 variables... the above comment is no longer relevant. HH Dec 2011
In the section titled "The general algorithm to compute ranks and bases", it is stated that "This echelon matrix T contains a ..."
The implication is that the example matrix is the T matrix, but this is never explicitly stated. Wouldn't it be clearer if the matrix was preceded by "T = ..." ?
First, this isn't really a bug and most other books/articles publish the three rules for equivalent transformations nearly the same way:
1) Multiply or divide a row by a non-zero value.
2) Switch two rows.
3) Add or subtract a (not necessarily integer) multiple of one row to another row.
1st suggestion for the 1st rule: remove "or divide" and add "real" before "value" (reason: dividing by value 'x' is equivalent to multiplying by value '1/x')
2nd suggestion for the 3rd rule: replace the whole rule by
3) Add one row to another row.
(reason: old rule 3 is a concatenation of rule 1 and new rule 3, or in other words: the old rule 3 contains rule 1)
In case you want to contact me, send a mail to ibbis at gmx dot de.
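For what it's worth, the three rules are easy to state as code, which also makes the redundancy argument visible: the old rule 3 is a scale, an add, and an un-scale. Illustrative Python, names my own:

```python
def scale_row(a, i, k):
    """Rule 1: multiply row i by a non-zero scalar k."""
    a[i] = [k * x for x in a[i]]

def swap_rows(a, i, j):
    """Rule 2: switch rows i and j."""
    a[i], a[j] = a[j], a[i]

def add_multiple(a, i, j, k):
    """Old rule 3: add k times row j to row i. With k = 1 this is the
    simpler proposed rule 3; the general version can be recovered by
    scaling row j, adding it, and scaling row j back."""
    a[i] = [x + k * y for x, y in zip(a[i], a[j])]
```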
I would like to second the simpler rule 3, "Add one row to another row." — Preceding unsigned comment added by Tom-oconnell ( talk • contribs) 00:36, 28 May 2017 (UTC)
The matrices should really be in []s, not ()s. 129.44.216.105 02:24, 9 March 2006 (UTC)
I've edited the rules that determine whether a matrix is in REF/RREF, as they were wrong. Specifically, the article (before my edit) stated that every leading coefficient must be 1 for a matrix to be in REF; this is actually a requirement for RREF. I've also cleaned up the formatting in that area to make the rules clearer. My source for the rules is:
Lay, David C. "Linear Algebra And its Applications". Third Edition - Low Price Edition, Page 30.
If anyone finds a source to the contrary, I would be interested - I don't know where the original information on the page came from.
Braveorca 23:30, 24 May 2006 (UTC)
I've been working on a huge rewrite of this (and the split-offs it spawned, heh) for a couple of days, and it's now done. I think it's all consistent and correct, but I haven't checked the source code examples or proofread very thoroughly (because my head hurts from too much of this). Any changes to fix stuff would be greatly appreciated. I think the rewrite fixes some major problems of the old one (no offence :)), making it more modular and, most importantly, noting that Gaussian elimination is not the same as Gauss-Jordan (though there are other "fixes" as well). -- Braveorca 02:01, 27 July 2006 (UTC)
I'm not a numerical specialist, but I do know that, although in general you shouldn't compare two floating-point numbers for equality, zero is a special case; it has an exact representation. Is there an algorithmic reason a small epsilon should be used instead of zero?
I think that some of the above statements and the remark in the article are the result of some misunderstanding or fuzzy thinking regarding how floating point works. There's some belief that floating point is a little random or uncertain or imprecise, but this isn't really any more true for floating point than for fixed point, including its special case, the integers. It is true that rounding errors and the lack of an exact representation for most real numbers often require using a comparison band. However, it isn't true that floating point numbers will always be slightly off, or that they are incapable of exact representation of a particular number--and it isn't true that exact equality should never be used with floating point, which seems to be what is implied here.
I believe the remark in the article is one of these mistakes, and should be removed. It is essential that the leading coefficient in each row be 1, not just something that is close to 1. And yes, in any sane non-broken fp implementation, dividing a number by itself will yield exactly 1. And yes, any sane non-broken fp implementation has an exact representation of 1. FP implementations don't just produce random imprecisions for no good reason, contrary to popular belief. Also, what specifically does testing for a narrow band around 1 instead of 1 itself add, even if this were a valid test? Dividing the row by a number very close to 1 is unlikely to have any significant negative effect on the result, unless the matrix is ill-conditioned, in which case other sources of error will probably be more significant anyway. Is this supposed to be some sort of ill-conceived performance optimisation suggestion, perhaps?
In fact, I think I'm going to remove the fp eps comment from the article. If you feel it needs to be re-added, please discuss it here. AaronWL 05:47, 10 November 2006 (UTC)
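A quick sanity check of the claim above (Python floats are IEEE 754 doubles; a non-IEEE platform could in principle behave differently):

```python
# Dividing a finite non-zero IEEE 754 double by itself yields exactly 1.0:
# the true quotient is 1, which is exactly representable, so correct
# rounding cannot change it.
for v in [3.0, 0.1, 1e300, 7.123456789e-12]:
    assert v / v == 1.0

# Results of longer computations, by contrast, generally do need a tolerance:
assert 0.1 + 0.2 != 0.3   # the sum rounds to 0.30000000000000004
```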
Gauss–Jordan elimination article states:
In other words, Gauss-Jordan elimination brings a matrix to reduced row echelon form, whereas Gaussian elimination takes it only as far as row echelon form. It is considerably less efficient than the two-stage Gaussian elimination algorithm.
Gaussian elimination article states:
Equivalently, the algorithm takes an arbitrary matrix and reduces it to reduced row echelon form (also known as row canonical form).
Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary operations while the second reduces it to reduced row echelon form, or row canonical form.
To me it sounds as though Gaussian elimination is one-stage and Gauss-Jordan is two-stage. Yet they both contradict this aspect. -- ANONYMOUS COWARD0xC0DE 05:36, 19 January 2007 (UTC)
I think the source of your confusion is that the two operations referred to in each article are different. In the Gauss-Jordan article, when it says "two-stage Gaussian elimination algorithm" it means: you first use Gaussian elimination, then find the solutions by back-substitution. It is this back-substitution that is stage two. In Gauss-Jordan elimination, you have to work harder to get the reduced row echelon form, but then no back-substitution is necessary.
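For readers unfamiliar with the term, the "stage 2" back-substitution on an upper-triangular augmented system is only a few lines. A sketch in illustrative Python, assuming nonzero diagonal pivots; the name is my own:

```python
def back_substitute(u):
    """Solve an n x (n+1) upper-triangular augmented system (last column is
    the right-hand side), assuming all diagonal pivots are nonzero."""
    n = len(u)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        # subtract the already-solved unknowns, then divide by the pivot
        s = u[i][n] - sum(u[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / u[i][i]
    return x
```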
A previous edit mentioned Lay's Linear Algebra and Its Applications as a source; nothing wrong with that but for a deeper treatment with more references, a very readable text is Carl Meyer's Matrix Analysis and Applied Linear Algebra. For more on the stability of Gaussian elimination, I would recommend Trefethen and Bau's Numerical Linear Algebra.
"The Gauss-Jordan (GJ) method is a variant of Gaussian elimination (GE). It differs in eliminating the unknown in equations above the main diagonal as well as below the main diagonal. The Gauss-Jordan method is equivalent to the use of reduced row echelon form of linear algebra texts. GJ requires 50% more multiplication and division operations than regular elimination. However, it can be used to produce a matrix inversion program that uses a minimum of storage. Solving using GJ gives ." — Atkinson, Kendall E., An Introduction to Numerical Analysis, 2e., pp. 522-523. Another point to note is that the pseudocode given in the Gaussian elimination article implements Gauss-Jordan elimination, not Gaussian elimination. Bekant 23:52, 5 February 2007 (UTC)
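Atkinson's 50% figure matches the standard leading-order operation counts: roughly n^3/3 multiplications/divisions for Gaussian elimination with back-substitution versus roughly n^3/2 for Gauss-Jordan. A quick check under one common textbook counting convention (illustrative Python; the bookkeeping is mine, not Atkinson's):

```python
def ge_ops(n):
    """Multiplications/divisions for Gaussian elimination on an n x n system
    with one right-hand side: forward elimination plus back-substitution.
    Leading term n^3 / 3."""
    ops = 0
    for k in range(n):
        # each of the n-k-1 rows below the pivot: one division for the
        # multiplier, then n-k multiplications (remaining columns plus RHS)
        ops += (n - k - 1) * (n - k + 1)
    ops += n * (n + 1) // 2   # back-substitution
    return ops

def gj_ops(n):
    """Multiplications/divisions for Gauss-Jordan: normalize each pivot row,
    then eliminate the pivot variable from every other row.
    Leading term n^3 / 2."""
    ops = 0
    for k in range(n):
        ops += n - k                  # divide the pivot row (right of pivot + RHS)
        ops += (n - 1) * (n - k)      # n-k multiplications in each other row
    return ops
```

The ratio gj_ops(n) / ge_ops(n) tends to 3/2 as n grows, i.e. about 50% more work for Gauss-Jordan.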
Let me restate this once again: if back-substitution is part of the Gaussian Elimination algorithm then Gaussian Elimination does bring a matrix to reduced row echelon form and the Gauss-Jordan article must be changed. Otherwise if back-substitution is not part of the Gaussian Elimination algorithm then Gaussian Elimination brings a matrix to JUST row echelon form and the Gaussian Elimination article needs to make explicit that back-substitution is NOT part of the algorithm. -- ANONYMOUS COWARD0xC0DE 02:40, 18 February 2007 (UTC)
Gaussian Elimination article:
Equivalently, the algorithm takes an arbitrary matrix and reduces it to row echelon form.
A related but less-efficient algorithm, Gauss–Jordan elimination, brings a matrix to reduced row echelon form, whereas Gaussian elimination takes it only as far as row echelon form.
Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary operations while the second reduces it to reduced row echelon form, or row canonical form.
At the end of the algorithm, we are left with That is, it is in reduced row echelon form, or row canonical form.
Gauss-Jordan elimination article:
In other words, Gauss-Jordan elimination brings a matrix to reduced row echelon form, whereas Gaussian elimination takes it only as far as row echelon form.
Sorry guys, you haven't convinced me that these contradictions are non-existent. I am getting tired of people ignoring blatant contradictions. Please open your eyes and fix it or I will (someday). -- ANONYMOUS COWARD0xC0DE 02:10, 18 May 2007 (UTC)
See headline —The preceding unsigned comment was added by 63.243.91.254 ( talk) 18:36, 9 February 2007 (UTC).
Why is the citecheck tag on this article? Which citations need support? What citations would you like added? Would you like to see a reference to a layman (undergrad) introduction to Gaussian elimination or to some more advanced material in numerical linear algebra? I could not find any information about it in the talk page. Maybe I could help. Fph 13:31, 19 March 2007 (UTC)
I think more information about partial pivoting should be added. As the page is now, pivoting is only explained through a pseudo-code algorithm, and is never properly introduced. I suggest adding an appropriate section. Fph 13:36, 19 March 2007 (UTC)
Reading over the intro paragraph to this article, it struck me as taking a long time to get around to revealing exactly what Gaussian elimination is, so I fixed it. Now, I'm an English person, not a math person, so please check over it and fix any factual errors I may have inserted. I just felt the textual style could be cleaned up a bit. Thanks, Applejuicefool 14:24, 10 May 2007 (UTC)
I would like to post comments on this article. In general, I have found Wiki to be exceptional and outstanding in its content. However, this particular article has left me quite perplexed. Please take my comments with a grain of salt, I do not know everything about matrix algebra. The first source of confusion is that in textbooks, online course notes (from professors), and hundreds of online references, most people today think of Gaussian elimination as an algorithm that, essentially, computes PA = LU. While this is mentioned in the article, the article seems to actively support the notion that the correct way of thinking about Gaussian elimination is as a factorization into ST. This is perplexing given that the three textbooks I have describe GE as a factorization into P, L, and U. As I mentioned, searching online provides the same discussion, be it class notes, theses, etc. Further, the general GE algorithm most practitioners are familiar with is only one "part", not two parts as the article claims. This one part computes the familiar LU decomposition that we would all find in textbooks, online pseudo-code, etc. It struck me that perhaps the author found the one textbook that truthfully proved that Gauss actually thought of both parts. All the textbooks I have describe only the first part. Considering the algorithm as a two-part entity thus seems to me to be highly likely to cause confusion for many other individuals besides myself.
(You are correct. The article uses nonstandard notation. The proper notation is LU. Jfgrcar ( talk) 06:34, 18 February 2012 (UTC))
Then the article claims that Gauss Elimination converts a matrix to row echelon form. The article also points out that a less efficient algorithm, Gauss-Jordan Elimination, converts a matrix to reduced row echelon form. I agree with these statements completely, as my understanding is the same. This distinction is made in the introductory section, before the table of contents for the article. Unfortunately, in the Example section, the article clearly states that at the completion of the Gaussian Elimination algorithm (after the second part), the resulting matrix is in reduced row echelon form. At the very least, this is confusing. It seems like the author(s) were, in some cases, familiar with the well-known GE algorithm, which always leaves matrices in row echelon form, regardless of whether they are augmented or not. If the discussed two-part algorithm always leaves matrices in reduced row echelon form, then at least the last sentence of the introduction must be changed. But I am suspicious of the authorship; it seems like half of the article was written with the commonplace understanding of the GE algorithm, while the last section of the example was written with the unexplained ST factorization. You will note that in the Example section of the article, we move from row echelon form to reduced row echelon form with no explanation at all. Naturally, we may all be sufficiently familiar with matrix algebra to perform this missing step without concern, at least on paper. This missing transformation method has caused me quite a ruckus, as I am a numerical matrix library fellow. The article assumes I can correctly derive the algorithm needed to perform the last step myself (as if, to someone who is implementing a matrix library, this would be obvious in the general case). I beg to disagree.
Another problem is that the article states that the rank of a matrix can be trivially computed by counting the non-zero rows after the GE algorithm. Again, this would be true if GE computed the reduced row echelon form. However, this is almost never true if GE, as most textbooks describe it, only produces row echelon form. Given the confusion stated above, this is even more confusing. This might sound like a trivial objection. In fact, this fact is the only reason I am writing this post. I implemented the commonplace GE some time ago, and was baffled as to how it could be stated that rank only needs to count the non-zero rows. In practice, most matrices that are factored into LUP using the commonly-thought-of GE algorithm never have zero rows.
Lastly, the pseudo-code listing really hits me as unnecessarily obfuscated. I cannot even divine where the so-called "first part" is, let alone the "second part". Also, I believe pseudo-code should be so simple we can almost immediately implement and, to at least some degree, understand what is going on. The provided pseudo-code is opaque to me. The usual GE pseudo-code listed in textbooks is 8 lines, each of which can be trivially understood. The provided pseudo-code is 23 lines, almost triple the size. I cannot understand the intent of it at all. Therefore, as pseudo-code, can it be said, without a doubt, that it is an absolutely useful contribution to the article?
In light of these observations, I urge that the article be modified to provide the well-known and commonly accepted GE algorithm, along with its pseudo-code. If the purpose of the article was to demonstrate that the well-known GE algorithm can be expanded to include a post-processing algorithm to produce reduced row echelon form, this should be clearly stated and explored as a sub-topic of the article, and, in particular, it should be very clearly discussed how the post-processing algorithm of the commonplace GE algorithm is more efficient than the mentioned (but less efficient) alternative, Gauss-Jordan elimination. 67.164.52.124 07:59, 12 May 2007 (UTC)
I was looking for this page for a class I'm taking and had trouble finding it because I was typing "Gausian" instead of "Gaussian". This seems like a fairly common misspelling; shouldn't there be a redirect for this spelling? 207.233.124.3 21:19, 15 May 2007 (UTC)
I made that same booboo I made last time, and noticed the redirect. Thank you, Mr. Niesen. 71.137.6.98 23:34, 17 May 2007 (UTC)
In the Analysis section, the article notes that the complexity of Gaussian Elimination on an nxn matrix is O(n^3). What is the complexity on an nxm matrix? —Preceding unsigned comment added by 68.106.184.113 ( talk) 19:53, 19 December 2007 (UTC)
comment: If the matrix is not square, it is likely that there is either no solution, or an infinite number of solutions. Thus, Gaussian elimination does not usually lead to a unique solution. In these cases, the algorithm is usually modified to return some "common-sense" equivalent of an answer, and these algorithms may have different run-times than the standard Gaussian-elimination procedure. —Preceding unsigned comment added by 146.186.131.40 ( talk) 17:01, 14 April 2010 (UTC)
This article says that in practice, matrix inversion is rarely used, since we really want to solve the system of linear equations. But that is exactly it!! Solving a system of linear equations is nothing but inverting its matrix... —Preceding unsigned comment added by 89.3.223.27 ( talk) 21:54, 16 April 2008 (UTC)
"In practice, inverting a matrix is rarely required. Most of the time, one is really after the solution of a particular system of linear equations"
I didn't really understand this article. I'm not a math genius or anything, but isn't that the point of Wikipedia, to make information accessible to laypeople? I mean we've got textbooks to teach the technical details to math students and so on. -- 98.214.255.102 ( talk) 05:51, 1 July 2008 (UTC)
The article, in the Example section, states:
"This algorithm works for any system of linear equations."
I think it needs to be made clear what "works" means. For example, consider the equations
x - y = 1
y - z = 1
z - x = 1
and attempt to apply the algorithm of the example: a contradiction is reached. Daqu ( talk) 00:59, 24 February 2009 (UTC)
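For what "works" could reasonably mean here: elimination still terminates on Daqu's system, and the contradiction surfaces as a row of the form 0 = nonzero, which an implementation can detect and report. A sketch in illustrative Python (naming and tolerances are my own):

```python
def eliminate_and_check(a):
    """Forward-eliminate an augmented matrix (last column = right-hand side),
    then report whether the system is consistent. A contradiction shows up
    as a row whose coefficients are all zero but whose right-hand side is not."""
    m, n = len(a), len(a[0])
    pivot_row = 0
    for col in range(n - 1):
        if pivot_row == m:
            break
        best = max(range(pivot_row, m), key=lambda r: abs(a[r][col]))
        if abs(a[best][col]) < 1e-12:
            continue  # no pivot in this column
        a[pivot_row], a[best] = a[best], a[pivot_row]
        for r in range(pivot_row + 1, m):
            f = a[r][col] / a[pivot_row][col]
            for c in range(col, n):
                a[r][c] -= f * a[pivot_row][c]
        pivot_row += 1
    for row in a:
        if all(abs(x) < 1e-9 for x in row[:-1]) and abs(row[-1]) > 1e-9:
            return "inconsistent"
    return "consistent"
```

On the system x - y = 1, y - z = 1, z - x = 1 the final matrix contains the row 0 0 0 | 3, i.e. 0 = 3.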
According to the article about row echelon form, rows should begin with 1. The example from this article contradicts that.
The parallel pseudocode section actually just contains C code that's not exactly newbie-friendly. I think this section is too complex to be "obviously correct," and therefore needs to be cited. There's an implementation on Google Code that we could link to, which makes a bit more sense to my mind [although in pivot column selection, abs isn't used, and it's not really commented any better].
At first glance, the issues with the code that's there now are that curly braces are missing (as written, k = matrix_demention will be used for the main body of gauss, although it appears the body was intended to be contained within the if (thread_id== ...) body) and that dimension is misspelled as demention. There are also more minor errors (barrier_t is not described, there is no mention of pthread_attr_init, the barrier is hit twice without explanation, and I don't see the pivot column selection, so I'm not sure the algorithm described actually is Gaussian).
Can someone with more math knowledge review this section? Does it even belong here? rev where it was added
Thatch ( talk) 22:39, 11 October 2009 (UTC)
This code is barely readable and should be corrected.
1. The comment at the top explains what the text following it does, while the rest of the comments explain what code should be implemented in that spot (but the code itself is neglected). This is inconsistent and confusing.
2. We shouldn't have words explaining what the code does (that is what the rest of the article is for!). We should have an implementation of the code instead. It would be ideal if the words in the code only served to describe what the code is doing, instead of replacing the code.
3. In most languages, you start counting at zero, not one, yet this convention is broken here. I think it would be best if we tried to make it as close to real-world code as possible.
4. I think the code would be easier to read if it was constructed with for loops like the parallel code below it. This way, the two can be more easily compared. They should also use the same variable names where applicable, and access the contents of a 2D array the same way as well, i.e. use the notation matrix[a][b], not A[i,j]. I think the problem lies primarily with the first code, while the second is a lot easier to read.
Well, I guess if you are used to code written in Fortran, this might be more tolerable to you. However, I think that most people don't know Fortran and thus aren't used to its weird conventions, and those who do know it tend to agree that it is difficult code to read and maintain in the first place. So, why don't we try moving this code into something a little more user-friendly?
159.242.229.96 ( talk) 19:02, 16 December 2009 (UTC)
May I change it from "Top priority" (seems not appropriate to me) to "High priority" and from "Start class" to "C"? Franp9am ( talk) 21:16, 28 August 2011 (UTC)
The History section clearly indicates that the method of Row Reduction had little to do with Gauss. Why should we continue to emphasize this historical misunderstanding (the Chinese discovered the method more than 2000 years ago!!!)? Shouldn't we reduce the role of the name 'Gaussian Elimination' to a synonym that is in common usage? The name 'Row Reduction' is also more descriptive of the process and is in common usage as well. Therefore, I recommend replacing almost all occurrences of the name 'Gaussian Elimination' with the name 'Row Reduction' while adding appropriate text to indicate that the name 'Gaussian Elimination' is a common synonym that is historically false and inaccurate. Cjfsyntropy ( talk) 22:32, 15 November 2011 (UTC)
Nobody calls it "row reduction". It is not the purpose of Wiki to rewrite the subject. Jfgrcar ( talk) 06:30, 18 February 2012 (UTC)
I've finished merging in Gauss-Jordan elimination. There were a few sourced claims that I haven't checked the sources for (because I don't have them), so I just copied them over directly... hopefully in the process I didn't introduce too many mistakes. Mark M ( talk) 09:44, 1 March 2013 (UTC)
Could someone maybe change the matrix in the example to one that has a unique solution? It's been a while since I've tried to solve a system of equations, but since that's what this is used for most of the time (?), wouldn't it be useful to have an example people could work themselves? If the point of reducing the matrix to row echelon form is to make the system easier to solve, it would be helpful for people to see that the system has a solution, or at least to point out that the system has no unique solution (and explain why, given the form of the matrix). I'd do it myself, but like I said, I'm a bit rusty. Lime in the Coconut 17:14, 14 June 2013 (UTC)
Row echelon form should never be merged with Gauss elimination, as the two are completely different things. Gauss elimination is the first step of the Gauss-Jordan method, which is used to solve systems of equations. Row echelon form is used only for finding the rank of a matrix. If there is any confusion, please consult me. — Preceding unsigned comment added by Praveen.ujjain ( talk • contribs) 07:48, 25 August 2013 (UTC)
The article says: "This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods." That link to iterative methods isn't very specific. Can we get a reference about using iterative methods (or a better link)? RJFJR ( talk) 01:12, 11 February 2014 (UTC)
Please, someone fix that — Preceding unsigned comment added by 200.129.202.130 ( talk) 19:30, 10 April 2014 (UTC)
The comment(s) below were originally left at Talk:Gaussian elimination/Comments, and are posted here for posterity. Following several discussions in past years, these subpages are now deprecated. The comments may be irrelevant or outdated; if so, please feel free to remove this section.
There needs to be an article somewhere that explores how influential the efficient solution of linear equations has been. This may be it. Geometry guy 19:51, 10 June 2007 (UTC)
Last edited at 19:51, 10 June 2007 (UTC). Substituted at 02:08, 5 May 2016 (UTC)
The article states that the bit complexity is exponential, based on cited paper [9], i.e., "On the worst-case complexity of integer Gaussian elimination" by Fang and Havas ( pdf). After looking over this paper, I realized their Gauss elimination pseudo-code is not very similar to the one provided in the article. Instead of multiplying the pivot row for iteration by like in the article, they use something like DIV , using modular arithmetic. After subtracting this multiplied by row from row , the value does not become zero but MOD . Next, they can change the pivot for column , by searching for the row with the lowest . This can be , because MOD (using the initial ).
However, I wonder what is the bit complexity for the pseudo-code from the article. I heard it is , because the Gauss elimination process can encounter numbers stored on bits. Daniel.porumbel ( talk) 18:50, 27 July 2017 (UTC)
It's easy to modify to handle arbitrary matrices. If everything in col k below row k is 0, try the col to the right and continue pivoting. I haven't found a pseudo-code reference for this but I'm confident it's correct in theory. If anyone can find a reference and update the page I'd be grateful. Here is an example:
1 * * *
0 0 1 *   <- row k
0 0 0 1
    ^ col l
Wqwt ( talk) 18:48, 13 March 2018 (UTC)
In the Generalizations sections, there is this quote: "Computing the rank of a tensor of order greater than 2 is NP-hard.[12] Therefore, there cannot be a polynomial time analog of Gaussian elimination for higher-order tensors (matrices are array representations of order-2 tensors). " The second sentence follows from the first only under the assumption P≠NP. Either that or the linked article proves something stronger than stated (i.e., proves that this problem is strictly harder than NP). The best ideals are radical ( talk) 19:35, 5 November 2019 (UTC)
@ D.Lazard: I know multiplying one row by -1 and then adding it to another is the same as subtraction. The objective is to make things as clear as possible to the novice. Hope you have no objection to including an explanation in brackets. -- Sahir 15:08, 6 January 2022 (UTC)
Pseudocode is very Python-ish and uses operator-soup Python-ish notation for ranges and loops, which makes it rather obscure. Would benefit from a more verbose and explicit version. 86.194.93.118 ( talk) 21:46, 12 January 2023 (UTC)
Viewing this page in the iPad app with appearance set to dark or black causes the equations in the "Example of the algorithm" section to be white-on-white. SESteve ( talk) 08:17, 23 March 2024 (UTC)
The French Wikipedia has an entry for fr:Élimination de Gauss-Jordan, which is linked to Gauss-Jordan elimination but not Gaussian elimination; however, the former redirects to an anchor within the latter, so AFAIK there is no way to discover the French entry without guessing that it is called Gauss-Jordan there.
I'm not sure how this should be solved, but maybe Gaussian elimination and Gauss-Jordan elimination should be unified in Wikidata, like they are unified in Wikipedia? — ncfavier 17:58, 17 April 2024 (UTC)