Nabla symbol "∇" read as "Gradient of" in inappropriate contexts #288

Closed · lnelson2382 opened this issue Sep 26, 2024 · 30 comments

@lnelson2382

In calculus and physics, the Nabla symbol "∇" by itself represents the gradient of a function, but if it is followed by a "⋅" or "×" symbol it instead represents the divergence or curl of that function respectively. Currently, the sequence "∇⋅f" is read as "(the) gradient of dot eff"; ideally, it would instead be read as "(the) divergence of eff" or "div eff", but it would also be understandable as "del dot eff" or "nabla dot eff". Would it be possible to change the pronunciation of this symbol, either by default or using a togglable setting similar to MultSymbolDot?
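
For reference, the standard Cartesian forms of the three readings being distinguished (textbook definitions, included here only for context):

$$\nabla f = \frac{\partial f}{\partial x}\,\hat{\mathbf x} + \frac{\partial f}{\partial y}\,\hat{\mathbf y} + \frac{\partial f}{\partial z}\,\hat{\mathbf z}$$

$$\nabla \cdot \mathbf F = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$$

$$\nabla \times \mathbf F = \left(\frac{\partial F_z}{\partial y} - \frac{\partial F_y}{\partial z}\right)\hat{\mathbf x} + \left(\frac{\partial F_x}{\partial z} - \frac{\partial F_z}{\partial x}\right)\hat{\mathbf y} + \left(\frac{\partial F_y}{\partial x} - \frac{\partial F_x}{\partial y}\right)\hat{\mathbf z}$$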

@NSoiffer
Owner

Apologies that this has taken so long to get to.

You can find a beta version of MathCAT here with div, grad, and curl supported.

Let me know if you find problems with this or anything else in this beta.

@NV-Codes
Contributor

NV-Codes commented Dec 31, 2024

MathCAT Version

MathCAT-0.6.6 (NVDA Add-On)

Update on Issue

"Curl" is properly verbalized as such, but "divergence" is not. Moreover, the bold and vector versions of nabla should be supported, as many textbooks, reference materials, etc., choose to use bold nabla ($\mathbf \nabla$) and vector nabla ($\vec{\nabla}$).

| Expression | Current Verbalization | Expected Verbalization | Matches |
| --- | --- | --- | --- |
| $\nabla f$ | "gradient of f" | "gradient of f" | Yes |
| $\nabla \cdot \mathbf f$ | "the gradient of times bold f" | "divergence of bold f" | No |
| $\nabla \times \mathbf f$ | "curl of bold f" | "curl of bold f" | Yes |
| $\mathbf \nabla f$ | "bold nabla f" | "gradient of f" | No |
| $\mathbf \nabla \cdot \mathbf f$ | "bold nabla times bold f" | "divergence of bold f" | No |
| $\mathbf \nabla \times \mathbf f$ | "bold nabla times bold f" | "curl of bold f" | No |
| $\vec{\nabla} f$ | "the gradient of with right arrow above f" | "gradient of f" | No |
| $\vec{\nabla} \cdot \vec{f}$ | "the gradient of with right arrow above times vector f" | "divergence of vector f" | No |
| $\vec{\nabla} \times \vec{f}$ | "the gradient of with right arrow above times vector f" | "curl of vector f" | No |

Note on GitHub Markdown

GitHub does not seem to render $\mathbf \nabla$ properly, but rows 5-7 use the bold nabla symbol.

@NSoiffer
Owner

NSoiffer commented Jan 1, 2025

Reopening to deal with other notations.

@NSoiffer NSoiffer reopened this Jan 1, 2025
@NV-Codes
Contributor

NV-Codes commented Jan 4, 2025

Also, I think that the default verbalization of the $\nabla$ symbol should be "nabla" instead of "the gradient of" (i.e., in Navigation mode and in all the contexts outside of "gradient," "divergence," and "curl"). For example, character-by-character navigation of $\nabla \times \mathbf f$ should probably be "nabla," "cross," "bold f" instead of "the gradient of," "times," "bold f."

@NSoiffer
Owner

NSoiffer commented Jan 5, 2025

I agree that "nabla" should be used, but remain uncertain about what to say for ×.

I've opened an issue on what to call some symbols in character navigation mode.

@NV-Codes
Contributor

NV-Codes commented Jan 6, 2025

That sounds good! I also wanted to add that a decision could be made to use either "nabla" or "del."

According to Edwin Bidwell Wilson's Vector Analysis (1901):

"This symbolic operator ∇ was introduced by Sir W. R. Hamilton and is now in universal employment. There seems, however, to be no universally recognized name for it, although owing to the frequent occurrence of the symbol some name is a practical necessity. It has been found by experience that the monosyllable del is so short and easy to pronounce that even in complicated formulae in which ∇ occurs a number of times, no inconvenience to the speaker or listener arises from the repetition. ∇V is read simply as 'del V'."

[Sourced from the Wikipedia article entitled "Nabla symbol."]

In fact, MathCAT says "del f" at "Terse" verbosity wherever it would say "gradient of f" at higher verbosity (for both SimpleSpeak and ClearSpeak).

@NSoiffer
Owner

NSoiffer commented Jan 6, 2025

In fact, MathCAT says "del f" at "Terse" verbosity wherever it would say "gradient of f" at higher verbosity (for both SimpleSpeak and ClearSpeak).

I don't remember doing that, but good to hear that I did the right thing :-)

@lnelson2382
Author

I've occasionally heard of the name "del" referring to the $\partial$ symbol rather than the $\nabla$ symbol, so maybe the default should be "nabla" with "del" being reserved for terse verbosity.

@NSoiffer
Owner

NSoiffer commented Jan 8, 2025

@NV-Codes, @lnelson2382: I've built a new version of the NVDA addon with the latest fixes. I'd appreciate it if you could try it out and let me know if the fixes work for you or if you find other problems. There are lots of changes since the previous release. You can find a list of most of them at %AppData%\Roaming\nvda\addons\MathCAT\doc\en\readme.html after you install the new addon.

@NV-Codes
Contributor

NV-Codes commented Jan 9, 2025

How can I try the new build? The NVDA add-ons site only has a link to the last stable version (0.6.6).

@NV-Codes
Contributor

NV-Codes commented Jan 9, 2025

I believe I found it at https://github.com/NSoiffer/MathCATForPython/releases/tag/latest!

I will try it out and let you know.

@NV-Codes
Contributor

NV-Codes commented Jan 10, 2025

Observations using MathCAT-0.6.7:

  • It seems that the Laplacian operator is read as "gradient of squared" in such cases as $\nabla^2 \psi$ instead of "nabla squared." The Laplacian represents the divergence of the gradient; either "Laplacian" could be verbalized or simply "nabla squared."
    • The nabla symbol is also verbalized as "gradient of" in Navigation Mode for the above expression instead of "nabla."
  • $\mathbf \nabla$ (bold nabla) is still not treated appropriately in contexts of gradient, divergence, or curl. Both regular nabla, $\nabla$, and vector nabla, $\vec{\nabla}$, notations are handled correctly.
    • The nabla symbol is verbalized as "gradient of" in $\nabla$ and $\vec{\nabla}$ (gradient of with right arrow above). "Nabla" should likely be used in Navigation Mode in general, and $\vec{\nabla}$ could be called "vector nabla" (much like "vector f" for $\vec{f}$) instead.
  • When checking for vectorial operands of $\times$ and $\cdot$, nabla should also count as a vector. Although the conditions for "gradient of," "divergence of" (currently "d i v of"), and "curl of" should override verbalization of "dot (product)" and "cross (product)" when they apply, expressions such as $\nabla' \times \vec{r}$ ("nabla prime cross (product) vector r" instead of "gradient of prime times vector r") should be supported.
  • The divergence is read as "d i v of"; "divergence of" would be clearer.
  • The new (semantic) zooming in Navigation Mode does not allow one to focus on the $\nabla$ symbol in gradient, divergence, or curl unless using "Character" Navigation Mode or LiteralSpeak; perhaps this is the intended behavior.
  • It would be more concise (and in line with terseness) if "dot product" and "cross product" were simply called "dot" and "cross" in "Terse" verbosity (much like tersely verbalizing "partial" instead of "partial derivative"). It should be noted that "dot" and "cross" are often used in classroom instruction, as is "partial," and this similarity to classroom terminology is one strength of "Terse" verbosity.

@NV-Codes
Contributor

NV-Codes commented Jan 11, 2025

A number of expressions are not read as expected:

$$\int_V d^3 r \nabla \times \mathbf A = \int_S dS \hat{\mathbf n} \times \mathbf A$$

  • The curl of bold cap A is not recognized on the LHS.

$$\int_S dS \hat{\mathbf n} \cdot \nabla \times \mathbf A = \oint_C d \ell \cdot \mathbf A$$

  • On the LHS, MathCAT correctly recognizes "bold n hat dot product curl of bold cap A"! On the RHS, the dot product is not verbalized because the author chose to use "script l" to mean a vector, but that is not MathCAT's fault. In such cases, having different terms for the ⋅ and × symbols (at least in Navigation Mode) would help.

$$\int_S dS \hat{\mathbf n} \times \nabla \psi = \oint_C d \ell \psi$$

  • MathCAT does not recognize "bold n hat cross product del psi" on the LHS. In particular, the cross product is not recognized, but "del psi" is recognized.

It seems that there are fewer issues when rendered by GitHub than when rendered by the source. That is likely due to some difference in the generation of the MathML, but the observations above are based on MathCAT's reading of the expressions as rendered by GitHub.


Please note that these integral identities were taken from the "Formula Reference" section of Andrew Zangwill's Modern Electrodynamics.

@NSoiffer
Owner

I'm glad you found it. I meant to include the link. Here is the build link for the addon in case others want to try it.

Re-opening to address bugs/other related notations noted above.

@NSoiffer
Owner

  • It seems that the Laplacian operator is read as "gradient of squared" in such cases as $\nabla^2 \psi$ instead of "nabla squared." The Laplacian represents the divergence of the gradient; either "Laplacian" could be verbalized or simply "nabla squared."

Fixed.

  • The nabla symbol is also verbalized as "gradient of" in Navigation Mode for the above expression instead of "nabla."

Something broke in switching the navigation mode from the keyboard. I tried changing the navigation mode in the preferences dialog and it properly says "nabla". The MS voices didn't speak "nabla" well, so I changed it to "naabla" and they now sound better; espeak is okay with both.

  • $\mathbf \nabla$ (bold nabla) is still not treated appropriately in contexts of gradient, divergence, or curl.

They work for me. Can you show me the MathML that you are using?

  • The nabla symbol is verbalized as "gradient of" in $\nabla$ and $\vec{\nabla}$ (gradient of with right arrow above). "Nabla" should likely be used in Navigation Mode in general, and $\vec{\nabla}$ could be called "vector nabla" (much like "vector f" for $\vec{f}$) instead.

I think I now have this working as desired.

  • When checking for vectorial operands of $\times$ and $\cdot$, nabla should also count as a vector. Although the conditions for "gradient of," "divergence of" (currently "d i v of"), and "curl of" should override verbalization of "dot (product)" and "cross (product)" when they apply, expressions such as $\nabla' \times \vec{r}$ ("nabla prime cross (product) vector r" instead of "gradient of prime times vector r") should be supported.

I noticed you spell out "d i v" -- it should be "div" as a word. Are you hearing the letters d, i, and v separately?

I don't remember seeing $\nabla' \times \vec{r}$, but I've added both the regular and bold versions as a potential left operand for the vector check.

  • The divergence is read as "d i v of"; "divergence of" would be clearer.

Currently I have "divergence of" for verbose mode and "div of" otherwise. I think it makes sense to say "div" (probably not "div of") in terse mode, but let me know if you feel otherwise. It's been a long time since I took vector calculus.

  • The new (semantic) zooming in Navigation Mode does not allow one to focus on the $\nabla$ symbol in gradient, divergence, or curl unless using "Character" Navigation Mode or LiteralSpeak; perhaps this is the intended behavior.

Yes, this is intended. Same for other things like absolute value -- you can't navigate to the vertical bars unless using character navigation or LiteralSpeak because they aren't part of the semantic interpretation. The idea is to allow you to focus more rapidly on the semantic content.

  • It would be more concise (and in line with terseness) if "dot product" and "cross product" were simply called "dot" and "cross" in "Terse" verbosity (much like tersely verbalizing "partial" instead of "partial derivative"). It should be noted that "dot" and "cross" are often used in classroom instruction, as is "partial," and this similarity to classroom terminology is one strength of "Terse" verbosity.

Good idea. I've added them.

As for the other problems you identified, I'm trying to figure out how to handle them. The problem is that ∇× is sort of a two character operator and I'm going to need some hack to deal with that. If I put a special case for it in the MathCAT cleanup code, then it affects the literal speech. But if I do it later, it gets messier and potentially won't cover all the cases. I'll figure something out. I'll let you know when I have a new build with this and the other fixes in it.
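
For readers following along, here is a rough Python sketch of the kind of pattern matching being described, treating ∇ followed by ⋅ or × as a single divergence/curl operator. This is purely illustrative: MathCAT is written in Rust and driven by its own rule files, so none of these names come from MathCAT itself.

import xml.etree.ElementTree as ET

NABLAS = {"\u2207", "\U0001D6C1"}   # regular nabla, bold nabla
DOTS = {"\u22C5", "\u00B7"}         # dot operator, middle dot
CROSSES = {"\u00D7"}                # multiplication sign (cross)

def leaf_text(node):
    """Unwrap wrappers such as <mover> (e.g. vec{nabla}) down to a single token."""
    tag = node.tag.split("}")[-1]
    if tag in ("mi", "mo"):
        return (node.text or "").strip()
    children = list(node)
    return leaf_text(children[0]) if children else ""

def nabla_operator(mrow_children):
    """Return 'gradient', 'divergence', or 'curl' for a nabla-led mrow, else None."""
    if not mrow_children or leaf_text(mrow_children[0]) not in NABLAS:
        return None
    if len(mrow_children) >= 2:
        op = leaf_text(mrow_children[1])
        if op in DOTS:
            return "divergence"
        if op in CROSSES:
            return "curl"
    return "gradient"

row = ET.fromstring("<mrow><mi>\u2207</mi><mo>\u00D7</mo><mi>f</mi></mrow>")
print(nabla_operator(list(row)))    # prints: curl

In the Pandoc output quoted later in this thread, an invisible-times operator (U+2062) can sit between the nabla and its argument, so it would need to be skipped or removed before this test; that is exactly where the cleanup vs. literal-speech trade-off described above comes in.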

@NSoiffer NSoiffer reopened this Jan 13, 2025
@NSoiffer
Owner

@NV-Codes: out of curiosity, are you a braille user? If so, which braille code? I know you found that nabla wasn't in UEB, so I'm guessing the answer is that you use UEB rather than Nemeth.

One of the meetings I'm part of deals with ebook readers and is interested to know if high-level math users primarily rely on speech, braille, or both. Anything I can tell them?

Another group is working on trying to improve the accessibility of MS Word's math support. Do you use Word or do you use a text editor and write TeX or something else?

Thanks for any feedback on your usage, experience, and suggestions.

@NV-Codes
Contributor

NV-Codes commented Jan 15, 2025

Something broke in switching the navigation mode from the keyboard. I tried changing the navigation mode in the preferences dialog and it properly says "nabla". The MS voices didn't speak "nabla" well, so I changed it to "naabla" and they now sound better; espeak is okay with both.

Yes, I noticed that I had to use the preferences dialog as well in order to change the navigation mode. Based on trying various spellings, "nahblah" also seems to produce a reasonable pronunciation for the word (using Windows OneCore voices with NVDA), but both are comprehensible. Such workarounds are sometimes important, but do they interfere with NVDA's speech output? I do not know whether there is a way to make synthesizer-specific changes to pronunciation from MathCAT's perspective or if there is a way to file an issue/suggestion for the OneCore developers.

They work for me. Can you show me the MathML that you are using?

Sure, for $\mathbf \nabla f, \mathbf \nabla \cdot \mathbf f, \mathbf \nabla \times \mathbf f$, Pandoc generated the following MathML (as copied by MathCAT):

<math xmlns='http://www.w3.org/1998/Math/MathML'>
 <mrow>
  <mrow>
    <mi>𝛁</mi>
    <mo>&#x2062;</mo>
    <mi>f</mi>
  </mrow>
  <mo>,</mo>
  <mrow>
    <mi>𝛁</mi>
    <mo>⋅</mo>
    <mi>𝐟</mi>
  </mrow>
  <mo>,</mo>
  <mrow>
    <mi>𝛁</mi>
    <mo>×</mo>
    <mi>𝐟</mi>
  </mrow>
 </mrow>
</math>

I noticed you spell out "d i v" -- it should be "div" as a word. Are you hearing the letters d, i, and v separately?

Yes, I hear the letters separately. It seems that OneCore voices pronounce "div" as "d, i, v." They pronounce "dihv" as was likely intended. The synthesizer seems to make inferences about whether certain words should be pronounced or spelled.

I've added both the regular and bold versions [of nabla] as a potential left operand

It could also be a right operand, as in the following vector identity:

$$\nabla (\mathbf a \cdot \mathbf b) = (\mathbf a \cdot \nabla)\mathbf b +(\mathbf b \cdot \nabla)\mathbf a +\mathbf a \times ( \nabla \times \mathbf b) +\mathbf b \times ( \nabla \times \mathbf a)$$

Currently I have "divergence of" for verbose mode and "div of" otherwise. I think it makes sense to say "div" (probably not "div of") in terse mode, but let me know if you feel otherwise.

The main issue is hearing "d, i, v" as three separate letters. In terse mode, the divergence is still pronounced "d-i-v of," and the curl is pronounced "curl of." Perhaps for consistency (and in line with terse mode more generally), "of" could be removed for both. This should still be clear.

Yes, this is intended. Same for other things like absolute value -- you can't navigate to the vertical bars unless using character navigation or LiteralSpeak because they aren't part of the semantic interpretation. The idea is to allow you to focus more rapidly on the semantic content.

That makes sense!

As for the other problems you identified, I'm trying to figure out how to handle them. [...] I'll let you know when I have a new build with this and the other fixes in it.

Thank you! The pre-release has already been helpful!

@NV-Codes
Contributor

I use a combination of speech and braille when possible. At the moment, I mostly like to use SimpleSpeak in terse mode. Particularly for lengthy expressions (and also for vectorial expressions before MathCAT-0.6.7), braille is especially helpful. Though I initially used the UEB code for math content, I've found that Nemeth is usually more concise (i.e., expressions tend to end up shorter because Nemeth can more concisely represent numerals, subscripts, superscripts, etc.). Expressions with vectors that use the arrow notation may be shorter in UEB (because UEB has a shortcut for vectors), but thankfully boldface notation is often used instead (and is concisely represented in Nemeth).

In learning to use both codes for math, I've made use of APH's free online curricula for the UEB Technical Code and Nemeth Code. Since I now primarily use the latter for math, I've also found NFB's Nemeth Symbol Library very helpful.

I typically use a text editor to write documents in Markdown and convert them to HTML using Pandoc, as it makes creating math content, as well as heading structure, lists, tables, footnotes, etc., relatively easy, and it allows the generation of screen-reader-friendly HTML (in addition to other formats as necessary):

pandoc -f markdown -t html --mathml -s <input>.md -o <output>.html

The math is written between (double) dollar signs in LaTeX notation. An issue still needs to be opened regarding Pandoc's generation of MathML (issue #325), as I rely on its conversion, though I do not know which library (if any) it uses for writing MathML.

I sometimes need to work with Word documents, but I find that converting them to HTML with Pandoc first makes navigation of the content easier. Importantly, Pandoc can also convert from pure LaTeX to accessible HTML, which is especially useful if one can obtain the source of textbooks (by contacting the publisher) and papers (from websites, databases, or journals), though it can sometimes be difficult and/or time-consuming to obtain the source TeX.

I hope that

  • publishing such materials in HTML (of course using MathML for math content) becomes the norm for better accessibility (including to allow the use of alt text, perhaps embedded sonifications, and other assistive features).
  • tools like Pandoc (and similar Markdown converters) will be able to generate MathML with intent (at least to the extent doable with LaTeX math source) and will fix current bugs with generation of MathML.
  • working with digital math algebraically can become easier (e.g., for showing work), perhaps by allowing conversion from Unicode Nemeth (including the ⣍ newline symbol) to LaTeX that can be pasted into source documents (instead of either working in LaTeX from the start or working in Nemeth or some shorthand notation and then having to transcribe the work into LaTeX manually).

In terms of reading and proofreading math content, MathCAT has been an incredibly helpful tool because of its speech, braille, and interactive navigation!

@NSoiffer
Owner

NSoiffer commented Jan 15, 2025

I confirmed that "div" is being spelled out for OneCore and SAPI5 settings. I'll switch to "dihv" which sounds correct in all the voices, including Eloquence.

I'm not 100% sure what the ∇ symbol should speak as. Some options have a more nasal "a" sound than others, but which is better? I'll go with "nahblah" as it is pretty consistent across the four speech engines I tried and the spelling makes it clearer what it should sound like.

At the moment, the rules don't have a way to access the speech engine although that might be something to add in the future. With these changes, it isn't necessary now.

Thanks for your suggestions on spellings.

@NSoiffer
Owner

NSoiffer commented Jan 15, 2025

@NV-Codes: for pandoc bugs, you can file the bug here. Pandoc does its own MathML generation. The author was very responsive when I filed a bug about \perp using the wrong Unicode character.

The problem with Pandoc's generation for nabla is that it puts nabla into an mi rather than an mo, and then it thinks it needs to add an invisible times in one case. I've added a cleanup to MathCAT.
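
A minimal sketch of that kind of cleanup over the Pandoc output quoted earlier (again illustrative Python, not MathCAT's actual Rust code): rewrite an mi holding a nabla as an mo, and drop a directly following invisible times.

import xml.etree.ElementTree as ET

NABLAS = {"\u2207", "\U0001D6C1"}   # nabla, bold nabla
INVISIBLE_TIMES = "\u2062"

def clean_nabla(parent):
    """Rewrite <mi>nabla</mi> as <mo> and remove a following invisible times."""
    children = list(parent)
    for i, child in enumerate(children):
        if child.tag.split("}")[-1] == "mi" and (child.text or "").strip() in NABLAS:
            child.tag = child.tag[:-2] + "mo"   # keep any namespace prefix, swap mi -> mo
            if (i + 1 < len(children)
                    and children[i + 1].tag.split("}")[-1] == "mo"
                    and (children[i + 1].text or "").strip() == INVISIBLE_TIMES):
                parent.remove(children[i + 1])
        clean_nabla(child)                      # recurse into nested mrows

After this pass, the first mrow in the Pandoc example reads <mo>𝛁</mo><mi>f</mi>, so later rules can see nabla as an operator.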

@NSoiffer
Owner

I've added both the regular and bold versions [of nabla] as a potential left operand

It could also be a right operand, as in the following vector identity:

$$\nabla (\mathbf a \cdot \mathbf b) = (\mathbf a \cdot \nabla)\mathbf b +(\mathbf b \cdot \nabla)\mathbf a +\mathbf a \times ( \nabla \times \mathbf b) +\mathbf b \times ( \nabla \times \mathbf a)$$

I ended up checking both. In the last example from Modern Electrodynamics that you listed, $\hat{\mathbf n} \times \nabla \psi$ doesn't have a vector or boldface on the right. I added nabla to the list that is considered a vector. Does that seem reasonable? It would speak something like "divergence times divergence" as cross product, but how often would that happen relative to something like $\nabla \psi$ being an argument?
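
Concretely, the kind of "treat this operand as a vector" check being described might look roughly like this. This is a hypothetical Python sketch, not MathCAT's rule syntax; the bold-letter range and arrow accents are assumptions about common MathML encodings.

NABLAS = {"\u2207", "\U0001D6C1"}          # nabla, bold nabla

def looks_like_vector(token, accent=None):
    """True if an operand of dot/cross should be treated as a vector."""
    if token in NABLAS:
        return True                        # nabla itself counts as a vector
    if accent in ("\u2192", "\u20D7"):     # arrow above, e.g. vec{r}
        return True
    # Mathematical bold Latin letters (A-Z, a-z) occupy U+1D400..U+1D433
    return len(token) == 1 and 0x1D400 <= ord(token) <= 0x1D433

print(looks_like_vector("\U0001D41F"))            # bold f    -> True
print(looks_like_vector("\u2207"))                # nabla     -> True
print(looks_like_vector("f", accent="\u20D7"))    # vector f  -> True
print(looks_like_vector("\u03C8"))                # plain psi -> False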

NSoiffer added a commit that referenced this issue Jan 15, 2025
Add a potentially modified ∇ as a signal that we have a vector.

These partially address #288
@NSoiffer
Owner

@NV-Codes: I think I have fixed all the issues you raised with cross product, div, etc. You can download a version with these fixes from my google drive. Please let me know what remaining or new issues you find. I'll work on figuring out what broke with switching navigation modes in the meantime.

@NV-Codes
Contributor

NV-Codes commented Jan 15, 2025

I ended up checking both. In the last example from Modern Electrodynamics that you listed, $\hat{\mathbf n} \times \nabla \psi$ doesn't have a vector or boldface on the right. I added nabla to the list that is considered a vector. Does that seem reasonable?

Yes! The nabla symbol formally behaves like a vector (it is a vector differential operator); in Cartesian coordinates, $\nabla = \frac{\partial}{\partial x} \hat{\mathbf x} + \frac{\partial}{\partial y} \hat{\mathbf y} + \frac{\partial}{\partial z} \hat{\mathbf z}$.

It would speak something like "divergence times divergence" as cross product, but how often would that happen relative to something like $\nabla \psi$ being an argument?

If I understood correctly, then you are asking about multiplying the results of two divergences. Since the divergence yields a scalar quantity, this would simply be multiplication of scalars (neither the dot product nor cross product), and hopefully any author would not use the cross symbol like that in vector calculus texts or similar contexts. It would likely be indicated implicitly by surrounding each divergence in parentheses and having the two parenthetical quantities adjacent, as in $(\nabla \cdot \mathbf A)(\nabla \cdot \mathbf B)$. On the other hand, $\nabla \psi$ is a vector quantity, as the gradient of a scalar field yields a vector field, and it is thus not uncommon for $\nabla \psi$ to appear as an operand in dot products and cross products.

I think I have fixed all the issues you raised with cross product, div, etc. You can download a version with these fixes from my google drive. Please let me know what remaining or new issues you find. I'll work on figuring out what broke with switching navigation modes in the meantime.

That sounds good, thank you!

@NV-Codes
Contributor

NV-Codes commented Jan 15, 2025

I was unable to access the file at the provided link, and the most recent version listed on the MathCAT for Python Releases page seems to be from last week. I read that one can clone the repository and perhaps run the "build-nvda-addon.sh" script to always have access to the latest add-on. That might make testing easier for the future as well.

Cloning a repository is relatively simple from the Git command-line program, but I was wondering

  • whether I was right to presume that running the shell script would produce an installable add-on file with the latest changes (as long as I run git pull beforehand).
  • how to run the SH file from Windows (perhaps using the Git program).

I appreciate your help!

@NSoiffer
Owner

There are a few repos for the various parts of MathCAT, so building isn't as easy as it could be. Also, it's a little simpler for me to avoid building the addon via GitHub, but that apparently is a little problematic. I've built a new 0.6.7-rc.3 version on GitHub. This fixes toggling the navigation mode and also the navigation speech verbosity. Right now, navigation mostly echoes commands only when set to Verbose, but my original idea was to echo only less common commands such as moving to the start/end of a line. I'd appreciate whatever suggestions you have as to when to echo commands.

Also, thanks for the info on how you handle math. I mostly hear how those who struggle with math deal with math. It is very useful to hear how expert users with advanced knowledge of math use it and the surrounding ecosystem. Sadly, people with your abilities are the exception rather than the rule. Congratulations on your accomplishments!!!

@NSoiffer
Owner

@NV-Codes: I passed along your response about usage to the accessibility group I mentioned and got several replies expressing thanks. Susan Osterhaus asked me to pass along her response to you:

I also noticed that they used the APH UEB and Nemeth Online Tutorials. Mario Cortesi and I authored the original Nemeth Tutorial back in 1997 for Gaylen Kapperman’s Research Development Institute upon which the APH Nemeth Tutorial is based.

I’m also excited that your user found the Symbol Library helpful!! The NFB Symbol Library is actually part of the Nemeth Braille Code Curriculum authored by Tina Herzberg, Sara Larkin, and me. This curriculum was developed in partnership with the National Federation of the Blind (NFB) and Pearson’s Accessibility Team for Assessments. It started out on Pearson’s website, but needed to be moved. NFB attempted to upload it, but the temporary webmaster they hired failed to include everything. Therefore, we moved everything to the Perkins’ Website at https://www.pathstoliteracy.org/nemeth-curriculum/. To get to the complete Symbol Library, you can access it from there or go directly to: https://www.pathstoliteracy.org/nemeth-symbol-library/ It’s nice to know that your user still enjoyed using the symbol library. If you look carefully at the NFB website, you will notice that only the first symbol description of absolute value contains links to our three sample documents, whereas the Perkins’ version has sample documents for all of the entries. I agree that the description is quite wonderful, but we put a lot of effort into creating those examples, and we would like for your user and others to be able to take advantage of them. Neil, could use please pass along this information to your user? By the way, the entire curriculum has three components: Pre-K to Second Grade Nemeth Curriculum, Nemeth Focused Lessons, and Nemeth Symbol Library and it is available at no cost.

Sara, who is also on this email thread, may wish to say more about the Nemeth Symbol Library because it is her baby and very close to her heart!!

@NV-Codes
Contributor

Thank you! I'm very grateful for the free and accessible online resources for math braille and of course for MathCAT as well!

@NV-Codes
Contributor

NV-Codes commented Jan 20, 2025

Some notes on 0.6.8 RC 2:

  • In $\nabla =\frac{\partial}{ \partial x} \hat{\mathbf x} +\frac{\partial}{ \partial y} \hat{\mathbf y} +\frac{\partial}{ \partial z} \hat{\mathbf z}$, nabla is called "gradient of," but it is correctly called "nabla" in other contexts.
  • In the original table's bottom six entries, only the gradients are recognized. The divergence expressions are respectively spoken as "nahblah dot bold f" and "vector nahblah dot vector f," while the curl expressions are respectively verbalized as "nahblah cross bold f" and "vector nahblah cross vector f." Although inferring divergence and curl in these scenarios is the goal, this phraseology is still much better than the original, and it is unambiguous! Perhaps this is a result of GitHub's particular MathML translation.
  • Using the "Simple" navigation mode, I think it makes sense to treat gradient, divergence, and curl as 2D structures, such that "del," "curl," and "dihv" are pronounced when navigating to the appropriate expressions. I understand that semantic zooming means that only the arguments of these operators will be verbalized upon further zooming, but it seems erroneous for "Simple" navigation to verbalize $( \nabla \cdot \mathbf A)( \nabla \cdot \mathbf B)$ as "open, bold cap A, close, open, bold cap B, close" when moving from left to right (at terse verbosity) instead of "open, dihv bold cap A, close, open, dihv bold cap B, close."
  • I tried to change "Navigation speech to use when beginning to navigate an equation," but it now does not seem to work (i.e., it makes no difference what the value of the setting is, even after restarting NVDA). Previously, the entire expression was read after starting navigation; now only the first term is spoken (using "Simple" navigation), but changing the setting back makes no difference. How is this feature intended to work? Update: The behavior seems to get stuck in one of the two aforementioned modes (seemingly without regard to the user's settings); I presume the former behavior is intended to be the overview, while the latter is intended to be "Speak."

Good news:

  • "Dot" and "cross" are now used as terse shorthand for "dot product" and "cross product."
  • Copying math as MathML, LaTeX, ASCIIMath, and Speech all work!
  • Terse mode now recognizes and verbalizes function notation.
  • Vector nabla ($\vec{\nabla}$) is now succinctly called "vector nahblah."
  • The Laplacian operator is now correctly identified ($\nabla^2 A$ is spoken as "Laplacian of cap A").
    • The vector and bold versions of the Laplacian operator should be supported as common variants as well.
    • In the interest of consistency, "of" could be removed for terse mode, and the Laplacian operation could also be treated as a 2D structure for "Simple" navigation and semantic zooming, but MathCAT's speech is mathematically clear.
    • The (case-sensitive) spelling "LahPlahsian" could be used for better pronunciation by all three default NVDA synthesizers.
    • Currently, one can zoom into the Laplacian, but the superscript 2 is not able to be accessed in "Simple" navigation mode. If the Laplacian operation and its operand are treated as a semantic unit, however, this would not be an issue, unless nabla squared appears alone.

@NSoiffer
Owner

@NV-Codes: I think I have the speech issues fixed and also the overview in navigation working. I haven't done much work on overviews, so they probably aren't that useful. One thing that might be confusing is that an overview is given for longer expressions but for shorter ones, it just reads the expression since that would be about the same number of words. The threshold between speaking and overviews needs lots of work.

For navigation in Simple mode, what I've decided to do is to treat semantic interpretations like 2D structures -- they would speak the entire semantics (e.g., "div of bold f"). If you zoom in, you will hear the children just as you would zooming into a fraction, square root, etc. I think this is sensible, but I need to play around a bit more before I'm confident that this really is reasonable... and isn't buggy.
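
As a mental model only (a hypothetical sketch, not MathCAT's internal data structures), the behavior described above could be pictured like this:

from dataclasses import dataclass, field

@dataclass
class NavNode:
    spoken: str                                    # what Simple-mode navigation announces here
    children: list = field(default_factory=list)   # what zooming in reveals

# "curl of bold f": the nabla and cross belong to the semantic reading, so zooming in
# goes straight to the argument; the individual symbols are only reachable in
# Character navigation or LiteralSpeak.
curl = NavNode("curl of bold f", [NavNode("bold f")])

print(curl.spoken)                        # prints: curl of bold f
print([c.spoken for c in curl.children])  # prints: ['bold f']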

@NV-Codes
Contributor

I think I have the speech issues fixed and also the overview in navigation working. I haven't done much work on overviews, so they probably aren't that useful. One thing that might be confusing is that an overview is given for longer expressions but for shorter ones, it just reads the expression since that would be about the same number of words. The threshold between speaking and overviews needs lots of work.

What exactly is an overview? Perhaps rather than having a threshold, MathCAT could simply behave according to the setting, without regard to the number of words. For instance, if an overview gives the number of terms in an expression and a user finds this helpful, then verbalizing "one term" could still be useful. Perhaps there should also be a setting to skip introductory navigation speech altogether and simply start reading the expression's first element (Enhanced mode), "word" (Simple mode), or character (Character mode), as one likely hears the expression in full before choosing to navigate it.

For the overview, simply listing the number of terms in an expression could be useful, similar to verbalizing the number of rows and columns in a matrix, though I am not familiar with MathCAT's current output for overviews. I tend to begin navigating as soon as an expression is entered, but a terse outline could speed up interpretation of complex expressions.
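
To make that concrete, here is a toy sketch of the "count the top-level terms" idea (entirely hypothetical; MathCAT's overview logic may work quite differently, and a leading unary minus would need special handling):

import xml.etree.ElementTree as ET

def overview(mrow):
    """Report the number of top-level terms, split at + and minus signs."""
    signs = sum(1 for child in list(mrow)
                if child.tag.split("}")[-1] == "mo"
                and (child.text or "").strip() in {"+", "\u2212", "-"})
    n = signs + 1
    return f"{n} term" + ("" if n == 1 else "s")

row = ET.fromstring("<mrow><mi>a</mi><mo>+</mo><mi>b</mi><mo>\u2212</mo><mi>c</mi></mrow>")
print(overview(row))   # prints: 3 terms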

For navigation in Simple mode, what I've decided to do is to treat semantic interpretations like 2D structures -- they would speak the entire semantics (e.g., "div of bold f"). If you zoom in, you will hear the children just as you would zooming into a fraction, square root, etc. I think this is sensible, but I need to play around a bit more before I'm confident that this really is reasonable... and isn't buggy.

Great, that sounds good!
