Saturday, July 9, 2011

Linear Algebra: finding a basis for a subspace of $\mathbb{R}^4$

Question

So I am stuck on this example from my Intro. Linear Algebra book:

Find a basis for the subspace $\mathbb{W} = \left\{\begin{bmatrix}2s-t\\ s\\ t\\ s\end{bmatrix} : s, t \in \mathbb{R}\right\}$ of $\mathbb{R}^4$.

I'm not exactly sure how I'm supposed to find the basis in this case. Am I just supposed to use a random t and s value and call the single vector a basis? (There were no previous examples in the book that were similar)

Thank you

Answer

Virtuoso, you have two free variables, $s$ and $t$, to choose from in forming a vector in the subspace $\mathbb{W}$; together they determine your vector. So we know that we can represent any vector in $\mathbb{W}$ as $\begin{bmatrix}2s-t\\ s\\ t\\ s\end{bmatrix}$ = $\begin{bmatrix}2s\\ s\\ 0\\ s\end{bmatrix} + \begin{bmatrix}-t\\ 0\\ t\\ 0\end{bmatrix}$. Here I am just decomposing a generic vector in our subspace into its independent components -- you can't break it down any further, since it is determined by these two variables. Now you can pull out $s$ and $t$ to see that any vector in $\mathbb{W}$ can be represented as $s \begin{bmatrix}2\\ 1\\ 0\\ 1\end{bmatrix} + t \begin{bmatrix}-1\\ 0\\ 1\\ 0\end{bmatrix}$, where you can pick any $s, t \in \mathbb{R}$. In other words, the vectors $\begin{bmatrix}2\\ 1\\ 0\\ 1\end{bmatrix}$ and $\begin{bmatrix}-1\\ 0\\ 1\\ 0\end{bmatrix}$ span the subspace $\mathbb{W}$, and since they are linearly independent, they form a basis for your subspace. Since the dimension of a subspace is equal to the number of linearly independent vectors needed to span it (or, alternatively, the number of basis vectors), you can infer that the dimension of $\mathbb{W}$ is 2.
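If you want to double-check this numerically, here is a quick sketch (my own, using numpy; not from the book): it verifies that the two candidate basis vectors are linearly independent and that the generic vector $(2s-t,\ s,\ t,\ s)$ really is $s$ times the first plus $t$ times the second.

```python
import numpy as np

v1 = np.array([2, 1, 0, 1])   # coefficient vector of s
v2 = np.array([-1, 0, 1, 0])  # coefficient vector of t

# Linear independence: the 4x2 matrix [v1 v2] has rank 2.
B = np.column_stack([v1, v2])
print(np.linalg.matrix_rank(B))  # 2, so v1 and v2 are independent

# Membership: for sample values of s and t, the generic vector
# (2s - t, s, t, s) equals s*v1 + t*v2.
for s, t in [(1.0, 0.0), (0.0, 1.0), (3.0, -2.0)]:
    w = np.array([2*s - t, s, t, s])
    assert np.allclose(w, s*v1 + t*v2)
print("decomposition checks out")
```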

Note that this is not the basis, as there is no such thing -- there are infinitely many pairs of vectors that would form a basis for $\mathbb{W}$. All you need is for the two vectors to be (1) linearly independent and (2) contained in the subspace $\mathbb{W}$. So you could, for example, multiply our basis vectors by any nonzero real numbers you want, as one way of getting a new pair of basis vectors.
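To illustrate that non-uniqueness, here is a small sketch (again mine, with scaling factors chosen arbitrarily): scaling each basis vector by a nonzero real gives a new pair that is still linearly independent and still spans $\mathbb{W}$.

```python
import numpy as np

v1 = np.array([2, 1, 0, 1])
v2 = np.array([-1, 0, 1, 0])
u1, u2 = 3 * v1, -2 * v2   # another pair of vectors in W

# Still linearly independent...
B = np.column_stack([u1, u2])
assert np.linalg.matrix_rank(B) == 2

# ...and a generic vector of W, say with s = 5, t = 7, is still a
# combination of u1 and u2 (solve the least-squares system, then check
# the fit is exact).
s, t = 5.0, 7.0
w = np.array([2*s - t, s, t, s])
coeffs = np.linalg.lstsq(B, w, rcond=None)[0]
assert np.allclose(B @ coeffs, w)
print("the scaled vectors form a basis too")
```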

Questions about right ideals

Question

The question is from the following problem:

Let $R$ be a ring with a multiplicative identity. If $U$ is an additive subgroup of $R$ such that $ur\in U$ for all $u\in U$ and for all $r\in R$, then $U$ is said to be a right ideal of $R$. If $R$ has exactly two right ideals, which of the following must be true?

I. $R$ is commutative.
II. $R$ is a division ring.
III. $R$ is infinite.

I know the definition of every concept here. But I have no idea what is supposed to be tested here.

  • Why is the ring $R$ which has exactly two right ideals special?
  • What theorem does one need to solve the problem above?

Edit: According to the answers, II must be true. For III, $R$ can be a finite field, according to mt_. What is the counterexample for I, then?

Answer

The trick here is to see that $0$ and $R$ are always right ideals. Since $R$ is not the zero ring (otherwise there would be only one right ideal), these two are distinct, so every right ideal must be either $0$ or $R$. From this you can prove that every non-zero element has a right inverse: for $a\in R-\{0\}$, the set $aR$ is a right ideal containing $a\cdot 1 = a \neq 0$, hence $aR = R$, so there is an $r\in R$ with $ar=1$.
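You can see this "either $0$ or $R$" dichotomy concretely in the rings $\mathbb{Z}/n\mathbb{Z}$. A brute-force sketch (my own illustration, not from the answer): the ideals of $\mathbb{Z}/n\mathbb{Z}$ correspond to the divisors of $n$, so the ring has exactly two ideals, $\{0\}$ and the whole ring, precisely when $n$ is prime -- which is exactly when every nonzero element is invertible.

```python
from itertools import combinations

def ideals_of_Zn(n):
    """Enumerate all ideals of Z/nZ by brute force: subsets containing 0,
    closed under addition and under multiplication by any ring element."""
    elements = range(n)
    found = []
    for size in range(1, n + 1):
        for subset in combinations(elements, size):
            s = set(subset)
            if 0 not in s:
                continue
            closed_add = all((a + b) % n in s for a in s for b in s)
            closed_mul = all((u * r) % n in s for u in s for r in elements)
            if closed_add and closed_mul:
                found.append(s)
    return found

print(len(ideals_of_Zn(5)))  # 5 is prime -> 2 ideals: {0} and Z/5Z
print(len(ideals_of_Zn(6)))  # 6 = 2*3  -> 4 ideals, one per divisor
```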

Edit: A ring has precisely two right ideals if and only if it is a division ring. Since there exist finite fields (ruling out III) and non-commutative division rings such as the quaternions (ruling out I), only II must be true. The argument for why a division ring has exactly two right ideals is the following. (Repeated from a comment below.) Again, $0$ and $R$ are right ideals. Assume there is a right ideal $I$ with a non-zero element $a$. Then there is $a'\in R$ with $aa' = 1$ (an inverse), therefore $1 \in I$, hence $I = R$.
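The two counterexamples can be checked concretely. A sketch (mine, for illustration): the Hamilton product on tuples $(a, b, c, d) = a + bi + cj + dk$ shows $ij = k$ but $ji = -k$, so the quaternions are not commutative (ruling out I), while $\mathbb{Z}/5\mathbb{Z}$ is a finite field in which every nonzero element has an inverse (ruling out III).

```python
# Quaternion (Hamilton) product on tuples (a, b, c, d) = a + bi + cj + dk.
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1)  = k
print(qmul(j, i))  # (0, 0, 0, -1) = -k: not commutative

# Z/5Z is a finite field: every nonzero element has an inverse
# (three-argument pow computes modular inverses in Python 3.8+).
p = 5
inverses = {a: pow(a, -1, p) for a in range(1, p)}
assert all((a * inverses[a]) % p == 1 for a in inverses)
print(inverses)
```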