Since most people who are color deficient are not completely “color-blind,” the optimal solution will be quite different from one person to the next. The tools provide a simulation of what the vis will look like, but getting this right is tricky because it depends on the shape of the cone filter functions, how that shape changes at different light levels, and individual differences. The rendering of the colors also depends on the particular display, especially when the computations are done in a color space derived from RGB, which uses the display’s primaries to define the gamut of producible colors.
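As a concrete illustration of what such a simulation involves, here is a minimal sketch assuming the open-source colorspacious package, which implements the Machado et al. (2009) model on top of sRGB. The function name, the severity value, and the example patches are illustrative; the model’s output is still only an approximation, for exactly the display- and observer-dependent reasons noted above.

```python
# Minimal sketch: simulate a color-vision deficiency for sRGB data,
# assuming the "colorspacious" package (Machado et al. 2009 model).
import numpy as np
from colorspacious import cspace_convert

def simulate_cvd(srgb_image, cvd_type="deuteranomaly", severity=100):
    """Return an sRGB image approximating how a CVD observer sees it.

    srgb_image: float array in [0, 1], shape (..., 3), sRGB-encoded.
    cvd_type: "deuteranomaly", "protanomaly", or "tritanomaly".
    severity: 0 (typical vision) to 100 (dichromacy); real observers vary.
    """
    cvd_space = {"name": "sRGB1+CVD", "cvd_type": cvd_type, "severity": severity}
    simulated = cspace_convert(srgb_image, cvd_space, "sRGB1")
    # The model can produce slightly out-of-gamut values; clip to the display gamut.
    return np.clip(simulated, 0, 1)

# Example: a saturated red and green patch become much harder to tell apart.
patches = np.array([[1.0, 0.0, 0.0], [0.0, 0.8, 0.0]])
print(simulate_cvd(patches))
```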
The author of this question seems to be interested in seeing a simulation of what a color-impaired person sees. If the ultimate goal is to create a vis whose information is readable, my advice is always to “get it right in black and white.” With very few exceptions, color-deficient people still have perfect luminance contrast judgments, so all the information in the black-and-white view is available to them. In casual viewing, I see that some of the tools designed to create color-safe versions for people with color impairment seem to change the luminance characteristics and therefore end up creating a less useful representation. This would be an interesting aspect to pursue as a research topic.
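One way to operationalize the “black and white” check is to compute relative luminance and compare contrast between palette entries. The sketch below uses the standard sRGB transfer function and the Rec. 709 / WCAG luminance weights; the helper names and the example colors are illustrative.

```python
# Check luminance contrast between sRGB palette colors ("get it right
# in black and white") using Rec. 709 / WCAG relative luminance.
import numpy as np

def srgb_to_linear(c):
    """Undo the sRGB transfer function (c in [0, 1])."""
    c = np.asarray(c, dtype=float)
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def relative_luminance(srgb):
    """Relative luminance Y of an sRGB color, shape (..., 3)."""
    return srgb_to_linear(srgb) @ np.array([0.2126, 0.7152, 0.0722])

def contrast_ratio(color_a, color_b):
    """WCAG-style contrast ratio between two sRGB colors (1:1 to 21:1)."""
    ya, yb = relative_luminance(color_a), relative_luminance(color_b)
    lighter, darker = max(ya, yb), min(ya, yb)
    return (lighter + 0.05) / (darker + 0.05)

# Example: a palette that varies only in hue has almost no luminance
# contrast, so it fails the black-and-white test before any CVD simulation.
print(contrast_ratio([1.0, 0.0, 0.0], [0.0, 0.55, 0.0]))  # red vs. mid green
```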
Another place to look for supporting research is in imaging. There, the topic is called “Daltonization” (named after John Dalton, who was color-blind), and it focuses on transforming images so that people with different anomalies can appreciate them.
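The basic daltonization idea can be sketched as: simulate what the observer loses, then push that lost difference into channels they can still see. The redistribution matrix below is the one commonly used in open-source daltonize implementations for red-green deficiencies; it is an illustrative choice, not a canonical one, and the colorspacious call is the same assumption as in the earlier sketch.

```python
# Rough daltonization sketch: add the "invisible" error back into
# channels a red-green-deficient observer can still distinguish.
import numpy as np
from colorspacious import cspace_convert

def daltonize(srgb_image, cvd_type="deuteranomaly", severity=100):
    """Shift information lost to a CVD observer into visible channels."""
    cvd_space = {"name": "sRGB1+CVD", "cvd_type": cvd_type, "severity": severity}
    simulated = np.clip(cspace_convert(srgb_image, cvd_space, "sRGB1"), 0, 1)
    error = np.asarray(srgb_image, dtype=float) - simulated  # what is lost
    # Redistribute the red-channel error into the green and blue channels
    # (matrix commonly used in daltonize implementations; illustrative).
    redistribute = np.array([[0.0, 0.0, 0.0],
                             [0.7, 1.0, 0.0],
                             [0.7, 0.0, 1.0]])
    corrected = srgb_image + error @ redistribute.T
    return np.clip(corrected, 0, 1)
```

Note that pushing the error into the green and blue channels changes the luminance relationships in the image, which is precisely the side effect warned about above: a daltonized vis can look more colorful to the intended viewer yet be less faithful in black and white.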
For future research, one area to explore is whether an overall transformation is the best way to go.
What if the transformation were adapted to where the important information is?