I am getting this error:
[E088] Text of length 1029371 exceeds maximum of 1000000. The v2.x parser and NER models require roughly 1GB of temporary memory per 100,000 characters in the input. This means long texts may cause memory allocation errors. If you're not using the parser or NER, it's probably safe to increase the `nlp.max_length` limit. The limit is in number of characters, so you can check whether your inputs are too long by checking `len(text)`.
Strangely, even if I reduce the number of documents being lemmatized, it still says the length exceeds 1,000,000. Is there a way to raise the limit above 1,000,000? The error message seems to suggest there is, but I haven't been able to make it work.
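For reference, here is a minimal sketch of what I understand the error message to be suggesting; the model name (`en_core_web_sm`) and the input file are placeholders, and the `disable` list follows the error's own hint that the parser and NER are the memory-hungry components:

```python
import spacy

# Lemmatization does not need the parser or NER; disabling them should
# keep memory usage down on long texts, per the error message's hint.
nlp = spacy.load("en_core_web_sm", disable=["parser", "ner"])

with open("corpus.txt", encoding="utf-8") as f:  # placeholder input file
    text = f.read()

# The limit is measured in characters, so raise it above the length of
# this particular input before calling the pipeline on it.
nlp.max_length = len(text) + 1

doc = nlp(text)
lemmas = [token.lemma_ for token in doc]
```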