though analog media have been described as "warmer" than their digital counterparts.
That is actually something you can show mathematically. The question is, are the gains from LPs worth the hassle of using them? Maybe. It depends on the situation.
Is it even worthwhile using LPs if you're just going to convert them to a digital format? Probably not, unless you have an exceptionally good LP player that keeps the sound quality very high and a recording device (i.e. a sound card) that records at a high enough resolution to justify using an analog source in the first place.
Why is analog considered "warmer"? Check out the image below, which compares an analog signal to a digital signal:

Notice how the digital signal is "sharper" and not as smooth as the analog line? That's why the analog source sounds "warmer". B)
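If you want to redraw that kind of picture yourself, here's a rough Python sketch (mine, not part of the original image; the 1 kHz tone and the deliberately low 8 kHz sampling rate are arbitrary picks so the steps are easy to see) that plots a smooth sine next to the stair-step version you get from sampling it:

```python
# Rough sketch of the analog-vs-digital comparison picture (not the original image).
# A densely-evaluated sine stands in for the "analog" signal; a coarsely sampled,
# stair-stepped copy stands in for the digital one.
import numpy as np
import matplotlib.pyplot as plt

freq = 1000.0          # 1 kHz test tone (arbitrary choice)
sample_rate = 8000.0   # deliberately low so the steps are easy to see

t_fine = np.linspace(0, 2 / freq, 2000)              # "analog": effectively continuous
t_samples = np.arange(0, 2 / freq, 1 / sample_rate)  # "digital": one dot per sample

plt.plot(t_fine, np.sin(2 * np.pi * freq * t_fine), label="analog (continuous)")
plt.step(t_samples, np.sin(2 * np.pi * freq * t_samples),
         where="post", marker="o", label="digital (sampled)")
plt.xlabel("time (s)")
plt.ylabel("amplitude")
plt.legend()
plt.show()
```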
There are a couple of main reasons why analog is normally considered "better-sounding" than digital:
1) The analog signal captures every single instant of the source sound. The digital signal only captures "samples" (snapshots) of the sound at set intervals (44,100 times per second in the case of CD-quality digital audio). As a result, the wave traced by the analog signal is smoother and "warmer" than the digital one. Fact of life: the analog signal is a smoother line than the digital signal, and it always will be.
2) This second reason relates back to the fact that digital audio "samples" the source sound instead of capturing every instant in time. The maximum frequency that can be present in a digital signal is equal to half of the sampling rate (the Nyquist limit). To represent one cycle of a wave digitally, you need at least two sample points: one near the top of the wave and one near the bottom. Because you need two points per cycle, the maximum frequency the signal can hold is half the sampling rate. In practice the wave can only really be considered accurate up to about 1/4 of the sampling rate, because the sample points won't necessarily land on the peaks and troughs of the actual soundwave being recorded. As a result, the digital signal distorts the original audio near the top of its range. For example:
Digital audio on a CD is sampled at a rate of 44,100 samples per second. This means the highest frequency that can appear in the audio is 22,050 Hz (cycles per second). It also means that the accuracy of the waveform starts to get questionable above about 12,000 Hz. It's "good enough" in most cases, but the higher the frequency of the sound, the less accurately it is captured. That's what gives the "sharper", "harsher" sound people associate with digital music. It's also why studios typically record at sampling rates of 48,000 Hz, 96,000 Hz, or 192,000 Hz: it brings the digital data closer to a smooth line, an "analog" sound with less distortion in the highest ranges of the music. You can think of analog as a digital signal with an infinite sampling rate; the higher the sampling rate, the closer the digital sound gets to an analog-like sound. Petrie still thinks I'm nuts for trying to justify recording above 44,100 Hz (claiming that only his pet bat can hear the difference) but that's my reasoning.
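To put some numbers on that, here's a tiny back-of-the-envelope calculation (just my own Python illustration; the test tones are arbitrary) of how few samples per cycle CD-rate audio actually spends on a tone as it climbs toward the top of the range:

```python
# How many samples does a 44,100 Hz sampling rate spend on one cycle of a tone?
# Fewer samples per cycle = rougher approximation of the original wave.
sample_rate = 44_100

for tone_hz in (1_000, 5_000, 12_000, 22_050):
    samples_per_cycle = sample_rate / tone_hz
    print(f"{tone_hz:>6} Hz tone: {samples_per_cycle:5.1f} samples per cycle")

# Roughly:
#   1000 Hz -> 44.1 samples per cycle (plenty to trace the shape)
#   5000 Hz ->  8.8 samples per cycle
#  12000 Hz ->  3.7 samples per cycle (this is where it starts to get questionable)
#  22050 Hz ->  2.0 samples per cycle (the Nyquist limit: the bare minimum)
```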
Imagine twice as many dots on that picture above, or 4 or 8 times as many. How much closer would the digital line be to the analog one? Exactly. B) There's a quick sketch of that idea below.
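This is my own Python experiment with arbitrary test values, and it connects the dots with straight lines, which is the simple picture I'm using here rather than how a real DAC reconstructs the signal. It samples one high tone at 1x, 2x, 4x, and 8x the CD rate and measures how far each version strays from a densely-sampled stand-in for the analog original:

```python
# Sample one high-frequency tone at 1x/2x/4x/8x the CD rate, "connect the dots"
# with straight lines, and see how far each version strays from a densely
# sampled stand-in for the analog original.
import numpy as np

tone_hz = 15_000                         # a high tone, where CD sampling is coarsest
t_fine = np.linspace(0, 0.01, 200_000)   # 10 ms, dense enough to act as "analog"
analog = np.sin(2 * np.pi * tone_hz * t_fine)

for rate in (44_100, 88_200, 176_400, 352_800):
    t_samples = np.arange(0, 0.01, 1 / rate)
    samples = np.sin(2 * np.pi * tone_hz * t_samples)
    digital = np.interp(t_fine, t_samples, samples)   # straight lines between dots
    worst_error = np.max(np.abs(digital - analog))
    print(f"{rate:>7} Hz sampling: worst deviation from analog = {worst_error:.3f}")

# The deviation shrinks as the rate goes up: more dots, closer to the analog line.
```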
On the other side, though, digital has plenty of advantages that make it preferable in many cases:
1) It's easily portable and storable on electronic media.
2) It's virtually immune to interference from external sources (such as radio waves and nearby power sources and power lines). Analog signals can pick up airborne waves and electrical noise from nearby voltage sources and carry them along the signal line to the output device (in this case, the speakers). There's a small simulation of this after the list.
3) It's resistant to physical damage to the media. Damage an analog source at all and you change the sound; a lightly scratched CD will still sound exactly the same, as long as the digital data can still be read.
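Here's a toy Python simulation of point 2 (entirely my own sketch, not modeled on any specific hardware): the same amount of noise that permanently alters an analog voltage just gets thresholded back out when the signal travels as digital bits.

```python
# Toy model of interference: add the same noise to an "analog" voltage and to a
# digital bitstream carried as voltages, then see what comes out the other end.
import numpy as np

rng = np.random.default_rng(0)

# Analog path: the noise lands directly on the audio voltage and stays there.
analog_signal = np.sin(2 * np.pi * np.linspace(0, 4, 1000))
noise = 0.1 * rng.standard_normal(analog_signal.shape)
analog_received = analog_signal + noise           # permanently altered waveform

# Digital path: the audio is a stream of bits sent as 0 V / 1 V levels.
bits_sent = rng.integers(0, 2, size=1000)
line_voltage = bits_sent.astype(float) + 0.1 * rng.standard_normal(bits_sent.shape)
bits_received = (line_voltage > 0.5).astype(int)  # threshold back to clean bits

print("analog waveform changed by up to:", np.max(np.abs(analog_received - analog_signal)))
print("digital bits flipped:", int(np.sum(bits_sent != bits_received)))

# With moderate noise the thresholding recovers every bit exactly; real digital
# systems add error correction on top of this as well.
```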