* Update pipeline word heuristic to work with whitespace in token offsets
This change checks for whitespace in the input string at either the
character preceding the token or the first character of the token
itself. This works both with tokenizers whose offsets exclude the
whitespace between words and with tokenizers whose offsets include it
(see the sketch after the commit list below).
Fixes #18111
* starting
* Use smaller model, ensure expected tokenization
* Re-run CI (please squash)
* Attention mask is important in the case of batching...
* Improve the fix.
* Making the sentences different enough that they exhibit different
predictions.
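
For reference, a minimal sketch of the word-boundary heuristic described in the first commit. The function name, example sentence, and offsets below are illustrative only and are not the pipeline's actual code:

```python
# Illustrative sketch of the whitespace heuristic; names and data here are
# hypothetical, not the pipeline's real implementation.
from typing import List, Tuple


def is_word_continuation(sentence: str, start_ind: int) -> bool:
    """Return True if the token starting at ``start_ind`` continues the previous word.

    A token starts a new word when there is whitespace either in the character
    immediately preceding it or in its own first character. This covers
    tokenizers whose offsets exclude the whitespace between words as well as
    tokenizers whose offsets include it.
    """
    if start_ind == 0:
        # The very first token always starts a word.
        return False
    # Inspect the preceding character and the token's first character.
    return " " not in sentence[start_ind - 1 : start_ind + 1]


if __name__ == "__main__":
    sentence = "Hello HuggingFace"
    # The same tokenization under two offset conventions:
    # offsets that exclude the inter-word space vs. offsets that include it.
    offsets_excluding: List[Tuple[int, int]] = [(0, 5), (6, 13), (13, 17)]
    offsets_including: List[Tuple[int, int]] = [(0, 5), (5, 13), (13, 17)]
    for offsets in (offsets_excluding, offsets_including):
        print([is_word_continuation(sentence, start) for start, _ in offsets])
    # Both conventions flag only the trailing "Face" piece as a continuation:
    # [False, False, True]
    # [False, False, True]
```

Checking both positions is what makes the heuristic convention-agnostic: the word-separating space appears before the token under one offset convention and as the token's first character under the other.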