Hey, welcome! WebGL is a fun technology. I was originally doing something quite similar, but wanted to switch over to using transform feedback to generate my data. WebGL was based upon the GLES2.0 spec at that time, which doesn't support TF. But it looks like GLES3.0 supports TF, (and GLES3.1 supports a full Compute Shader apparently!), and WebGL2 is built upon GLES3.0. Curious how widely WebGL2 is supported.. but either way, awesome.. I love browser-based tech! Can't wait to see what you're working on.
You probably don't need to have a 'position' map. Just figure out an equation to convert between world coordinates and texture coordinates. For example, if the upper-left vertex of the patch has texture coordinates (0,0) and the bottom-right (1,1), then your patch -> texture coordinate function is simply vec2 texCoords = mod(patchCoords.xy * textureScale, vec2(1.0));
where textureScale is the ratio between texture : patch size (i.e. a textureScale value of 4 would repeat the texture 4 times across the patch). You're likely looking for a 1:1 ratio, though, if you're not wanting to wrap/repeat your textures.
The 'resolutions' of the patch and texture do not need to match, however. The texture could be 512x512 texels, with the patch only 32x32, for example, but as long as the patch's texCoords range from (0,0) to (1,1), your texture should map to it correctly.
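If it helps, here's a quick CPU-side TypeScript sketch of that mapping (names are mine - just a sanity-check version of the mod() trick above):

```typescript
// Patch XY -> texture coordinates, wrapping into [0, 1).
// textureScale is the texture:patch ratio described above (1 = no repeat).
type Vec2 = [number, number];

function patchToTexCoords(patchXY: Vec2, textureScale: number): Vec2 {
  // GLSL's mod(x, 1.0) always lands in [0, 1); JS's % can go negative,
  // so wrap it the long way round.
  const wrap = (v: number) => ((v % 1) + 1) % 1;
  return [wrap(patchXY[0] * textureScale), wrap(patchXY[1] * textureScale)];
}
```

With textureScale = 1 this is the 1:1 mapping; with textureScale = 4 the texture repeats 4 times across the patch.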
If you do it this way, you can replace the position data in your RGB channels with the normal-vector data and keep the A channel for height values (that's actually exactly how I do it). And by packing the height data into the A channel of the normal vector, you end up saving 2x the VRAM. This is because in GLES2.0, unpacked floats and vec3s will still consume an entire vec4's worth of memory in order to remain byte-aligned.
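As a concrete (hypothetical) illustration of that packing - one RGBA float texel holding normal.xyz in RGB and height in A:

```typescript
// Pack a normal (xyz) plus a height into one RGBA texel of a Float32Array
// destined for a float texture. Names here are illustrative.
type Vec3 = [number, number, number];

function packTexel(data: Float32Array, texelIndex: number,
                   normal: Vec3, height: number): void {
  const o = texelIndex * 4; // 4 floats per RGBA texel
  data[o]     = normal[0];  // R
  data[o + 1] = normal[1];  // G
  data[o + 2] = normal[2];  // B
  data[o + 3] = height;     // A - the channel that would otherwise be padding
}
```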
With regard to tangent vectors (i.e. tangent and bi-tangent) - if you're working on a regular grid (i.e. the spacing between your x and y vertices is equal and constant across the grid), then you can use a couple of nice optimizations in calculating these vectors (including the normal). Forget about the Jacobian! Here's the algo:
hR = height value of the texel immediately to the right of this one.
hL = height value of the texel immediately to the left of this one.
hU = height value of the texel immediately above this one.
hD = height value of the texel immediately below this one.
float dx = hR - hL;
float dy = hU - hD;
vec3 tangent = normalize(vec3(texelSpacing, 0.0, dx * 0.5));
vec3 biTangent = normalize(vec3(0.0, texelSpacing, dy * 0.5));
vec3 normal = normalize(vec3(-dx, -dy, 2.0 * texelSpacing)); // note the negated dx/dy - that sign is what makes cross(tangent, biTangent) line up with the normal
You'll note that:
cross(tangent, biTangent) == normal;
cross(biTangent, normal) == tangent;
cross(normal, tangent) == biTangent;
As expected for an orthonormal coordinate basis. (Strictly speaking, tangent and biTangent are only exactly orthogonal to each other where dx * dy == 0, but for typical terrain slopes the basis is close enough to orthonormal that these identities hold to good precision.) Hooray for (nearly) loss-less optimizations!
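Here's the same algo as a runnable TypeScript sketch (my naming), using the sign convention where the unnormalized normal is (-dx, -dy, 2*texelSpacing), so that cross(tangent, biTangent) really does line up with the normal:

```typescript
type Vec3 = [number, number, number];

const normalize = (v: Vec3): Vec3 => {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
};

const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Central-difference tangent basis on a regular grid.
function patchBasis(hR: number, hL: number, hU: number, hD: number,
                    texelSpacing: number): { t: Vec3; b: Vec3; n: Vec3 } {
  const dx = hR - hL;
  const dy = hU - hD;
  return {
    t: normalize([texelSpacing, 0, dx * 0.5]),
    b: normalize([0, texelSpacing, dy * 0.5]),
    n: normalize([-dx, -dy, 2 * texelSpacing]),
  };
}
```

On flat terrain this gives the identity basis t=(1,0,0), b=(0,1,0), n=(0,0,1), and for sloped samples you can verify the cross-product identities numerically.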
Note that this produces the tan/bi-tan/normal vectors in what I like to call "Patch Space" - i.e. a space whose origin is at the center of the patch (although that's not necessary), and which is oriented such that the z-axis (in this case - it could be any of the cardinal axes) is normal to the patch (if it were completely flat). You'll need to transform from patch space into another coordinate system - such as eyeSpace - if you want to do lighting calculations, etc., but that's fairly typical practice. I find that having things defined in patchSpace (think of it as a type of modelSpace), as opposed to 'planetSpace' (i.e. origin at the planet center, Y axis aligned with the North Pole, for example), greatly simplifies multiple algorithms along the way.
Hope that helps! Feel free to ask more questions or share whatever you'd like!
edit: Another nice thing about computing and storing stuff in 'patchSpace' is that it allows you to make lots of assumptions. Here's a useful one:
Many people will store the tan and biTan vectors for each vertex in order to speed up lighting calculations. You may wish to do so, but if memory is the bottleneck and you have spare ALU capabilities, you can cheaply back-solve for the T and B vectors, having just the N (assuming you're in patch space). Check it out:
biTangent = normalize(cross(normal, vec3(1.0, 0.0, 0.0)));
tangent = normalize(cross(-normal, biTangent)); // the raw cross products aren't unit length, so normalize them
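A quick TypeScript check of that back-solve (again assuming patch space with z up, and the normal convention normalize(-dx, -dy, 2*texelSpacing)):

```typescript
type Vec3 = [number, number, number];

const normalize = (v: Vec3): Vec3 => {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
};

const cross = (a: Vec3, b: Vec3): Vec3 => [
  a[1] * b[2] - a[2] * b[1],
  a[2] * b[0] - a[0] * b[2],
  a[0] * b[1] - a[1] * b[0],
];

// Recover the (approximate) tangent and bi-tangent from the normal alone.
function basisFromNormal(n: Vec3): { t: Vec3; b: Vec3 } {
  const b = normalize(cross(n, [1, 0, 0]));
  const t = normalize(cross([-n[0], -n[1], -n[2]], b));
  return { t, b };
}
```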
I bet you could do this without the cross products too.. hmm... I'll get back to you on that!
edit2: sure enough! no cross-products required:
// assuming normal == normalize(vec3(-dx, -dy, 2.0 * texelSpacing)), with z 'up' in patch space
float normScaler = (2.0 * texelSpacing) / normal.z;
vec3 reconstitutedBasis = normal * normScaler; // rescales back to vec3(-dx, -dy, 2.0 * texelSpacing)
float dx = -reconstitutedBasis.x;
float dy = -reconstitutedBasis.y;
vec3 tangent = normalize(vec3(texelSpacing, 0.0, dx * 0.5));
vec3 biTangent = normalize(vec3(0.0, texelSpacing, dy * 0.5));
No idea if that's actually cheaper than computing the cross product twice, but I'd bet it is. Also, disclaimer: this is completely untested.
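Since the shader version is untested, here's a TypeScript version of the cross-product-free reconstruction that checks out numerically (same assumptions as before: patch space with z up, normal convention normalize(-dx, -dy, 2*texelSpacing)):

```typescript
type Vec3 = [number, number, number];

const normalize = (v: Vec3): Vec3 => {
  const len = Math.hypot(v[0], v[1], v[2]);
  return [v[0] / len, v[1] / len, v[2] / len];
};

// Rescale the unit normal so its z component is 2*texelSpacing again; that
// recovers the central differences dx and dy - no cross products needed.
function basisNoCross(n: Vec3, texelSpacing: number): { t: Vec3; b: Vec3 } {
  const normScaler = (2 * texelSpacing) / n[2];
  const dx = -n[0] * normScaler;
  const dy = -n[1] * normScaler;
  return {
    t: normalize([texelSpacing, 0, dx * 0.5]),
    b: normalize([0, texelSpacing, dy * 0.5]),
  };
}
```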
edit3: perhaps just to spite cybercritic, you can remove 3 multiplications from the above code... just remove the 2* and each *.5 (the factors of two cancel: halving dx and dy only scales the vectors fed into normalize). 'Course, it's a bit less readable that way, but I ain't judgin :)
edit4: oh yes.. and that tangent space basis is orthonormalized, so:

inverse(mat3(tangent, biTangent, normal))

is equal to

transpose(mat3(tangent, biTangent, normal))

No dealing with expensive matrix inversions! Just transpose your basis!
(Note you'd have to pre-multiply the Vobj position by the planet/worldSpace -> patchSpace transformation matrix first, to get Vobj into patch space. Then the matrix above will put you into tangent space (for each vertex). Do that patchSpace -> tangent-space conversion in the vertex shader, and use the interpolated TBN vectors it gives you in the fragment shader.)
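One last TypeScript sketch of why the transpose trick works: multiplying a patch-space vector by transpose(TBN) is just three dot products, one per basis vector - which, for an orthonormal basis, is exactly what the (expensive) inverse would have computed:

```typescript
type Vec3 = [number, number, number];

const dot = (a: Vec3, b: Vec3): number =>
  a[0] * b[0] + a[1] * b[1] + a[2] * b[2];

// Patch space -> tangent space via transpose(mat3(t, b, n)):
// just project the vector onto each (orthonormal) basis vector.
function toTangentSpace(v: Vec3, t: Vec3, b: Vec3, n: Vec3): Vec3 {
  return [dot(v, t), dot(v, b), dot(v, n)];
}
```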