How much quality is too much quality? That's one of the ultimate debates in digital audio. A full answer would take more space than this forum allows and more research than I'm willing to sit down and do.
Am I not improving the quality at all by using 320 Kbps over 196 or whatever Kbps?
Using 320 kbps over, say, 192 kbps will *always*, *always* improve quality. Why? Because 320 kbps captures a more accurate image of the original source sound than 192 kbps does, every single time, no matter what the original source is.
The question is, how important is the gain in quality? If you analyze two mp3s graphically, one at 192 kbps and one at 320 kbps, you will definitely see some differences. The higher the quality of the source material, the more differences you will see. This is because at lower bitrates, mp3 starts cutting out the detail in the higher frequencies in order to compress the data, even to the point of removing them entirely.
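(If you want to make this comparison yourself, here's a rough Python sketch that encodes the same source at both bitrates by calling ffmpeg with libmp3lame; the filenames are just placeholders.)

```python
# Rough sketch: encode one lossless rip at two bitrates so the results
# can be compared side by side. Assumes ffmpeg (built with libmp3lame)
# is on your PATH; "rescue_excerpt.wav" is a placeholder filename.
import subprocess

source = "rescue_excerpt.wav"
for bitrate in ("192k", "320k"):
    subprocess.run(
        ["ffmpeg", "-y", "-i", source,
         "-codec:a", "libmp3lame", "-b:a", bitrate,
         f"excerpt_{bitrate}.mp3"],
        check=True,
    )
```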
Let's have a little fun, shall we? Here are a couple of images:
Here is a spectrogram (the multicoloured box that is the focus of these images) of an excerpt from "Rescue, Discovery of the Great Valley". It is read as such:
Frequency runs from the bottom of the spectrogram to the top: the lowest frequencies (20 Hz and below, "bass") are at the bottom and the highest (20 kHz and above, "treble") are at the top. Black signifies little to no data in that frequency range. A slight volume level shows up as purple, then blue, green, yellow, orange and red as the level increases. The closer the colour is to red, the more audio content exists at that frequency.
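If you want to generate spectrograms like these yourself, here's a minimal Python sketch using SciPy and Matplotlib; the filename and the FFT size are just placeholder choices.

```python
# Minimal spectrogram plot: frequency on the vertical axis (bass at the
# bottom, treble at the top), time on the horizontal axis, colour = level
# in dB. "rescue_excerpt.wav" is a placeholder for whatever file you rip.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

rate, data = wavfile.read("rescue_excerpt.wav")
if data.ndim > 1:                      # mix stereo down to mono
    data = data.mean(axis=1)

freqs, times, power = spectrogram(data, fs=rate, nperseg=2048)

plt.pcolormesh(times, freqs, 10 * np.log10(power + 1e-12),
               shading="gouraud", cmap="inferno")
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Spectrogram of the excerpt")
plt.show()
```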
Anyway, here are the screenshots. This first one is a .wav ripped directly from the CD. It is completely lossless and not compressed in any way. Note especially the quantity of data in the red box I drew on the image.

Next up is an image of the same part of the music, encoded to a 320 kbps mp3. Again, note the quantity of data in the box I drew. The amount of data in the extremely high frequencies is significantly less than that in the lossless rip. Some data in the extreme ranges of the original .wav is completely gone in the 320 mp3.

Lastly, here is the .wav encoded to a 192 kbps mp3. Here we really see the difference between a 320 mp3 and a 192 mp3. Notice how much data has been removed from the audio in the higher ranges especially. Think: If this much data is removed from the higher ranges, how much have the slightly lower ranges been distorted? How much damage has been done to the audio by using this much compression? The answer, mathematically, is "quite a bit".

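As a rough way to put a number on that: you could measure how much energy survives above some cutoff in each version. Here's a sketch, assuming you've decoded the mp3s back to WAV first (e.g. with ffmpeg); the filenames and the 16 kHz cutoff are placeholder choices.

```python
# Compare how much energy above ~16 kHz each version keeps, relative to
# the lossless rip. Decode the mp3s back to WAV first, e.g.:
#   ffmpeg -i excerpt_192k.mp3 excerpt_192k.wav
# The filenames here are placeholders.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def high_freq_energy(path, cutoff_hz=16000):
    rate, data = wavfile.read(path)
    if data.ndim > 1:
        data = data.mean(axis=1)
    freqs, psd = welch(data, fs=rate, nperseg=4096)
    return psd[freqs >= cutoff_hz].sum()

reference = high_freq_energy("rescue_excerpt.wav")
for name in ("excerpt_320k.wav", "excerpt_192k.wav"):
    kept = high_freq_energy(name) / reference
    print(f"{name}: {100 * kept:.1f}% of the >16 kHz energy kept")
```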
Whether or not you can hear this difference depends on your listening style, focus, and "pickiness". It also depends on the quality of the equipment you're using to play back the audio (namely, the speakers). Low-quality speakers won't reveal a difference between a 160 kbps and a 320 kbps mp3, whereas a more expensive system will, especially if the listener is listening for it.
In reality, most of what is being cut out by a 192 kbps mp3 is inaudible to the human ear (most people's ears, anyway) because the frequencies involved are so high. Depending on who you are, this difference is more or less obvious. A young child would have an easier time telling a 192 mp3 from a 320 mp3 than a 30-year-old would in most cases, since high-frequency hearing declines with age.
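If you're curious where your own hearing tops out, a quick and dirty self-test is to generate pure tones at increasing frequencies and see where they disappear. Here's a sketch; the frequencies and volume chosen are arbitrary.

```python
# Write short sine tones to WAV files and raise the frequency until you
# can no longer hear them (keep the volume modest). Purely a rough
# self-test; the frequencies chosen here are arbitrary.
import numpy as np
from scipy.io import wavfile

rate = 44100
duration = 2.0
t = np.linspace(0, duration, int(rate * duration), endpoint=False)

for freq in (12000, 14000, 16000, 18000):
    tone = (0.2 * np.sin(2 * np.pi * freq * t)).astype(np.float32)
    wavfile.write(f"tone_{freq}Hz.wav", rate, tone)
```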
Even the big companies seem to favor 128 Kbps for their MP3 files in their games.
This is interesting. I can't think of any examples off-hand of games that use 128 kbps mp3s. Are you talking about newer games? Older games may have used more compression in order to reduce the overall size of the game on the hard drive. Most new games don't care about how much space they take up anymore.
