QAction and accessibility: how to read text when activated?
-
I'm working on a Qt5 Widgets application which has a QAction that can be activated either through the app's menu or through a keyboard shortcut. I want the screen reader to announce that the action is being performed when the QAction is activated. Does anyone know if Qt supports this use case? If it does, how do we get assistive technologies to pick up on a QAction being activated?
-
Hi,
Do you mean taking advantage of Qt's accessibility support?
-
@SGaist that might be it. My goal is to allow assistive technologies (e.g. screen readers like NVDA) to inform the user that an action was executed, or more specifically that a QAction was triggered. Does Qt support anything like this?
@rmam
@SGaist gave a link to https://doc.qt.io/qt-5/accessible.html. I have not used it, but does the QAccessibleActionInterface class allow what you want, perhaps via const QString &QAccessibleActionInterface::pressAction()? TBH I don't know any more than this about how to use it, so you may have to wait for someone else to give you guidance.
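One thing that might also be worth trying (untested, and I don't know if the Windows accessibility bridge forwards it to NVDA): Qt lets you push events to assistive technologies yourself with QAccessible::updateAccessibility(), including an Alert event on a widget holding the announcement text. A rough sketch, where announceForAccessibility and statusLabel are made-up names:

```cpp
// Untested sketch: make assistive tech announce some text when a
// QAction fires, by raising an accessibility Alert event on a label
// that carries the announcement. Whether the screen reader speaks it
// depends on the platform's accessibility bridge.
#include <QAccessible>
#include <QAction>
#include <QLabel>
#include <QObject>

// Hypothetical helper: put the text on a label and alert the bridge.
void announceForAccessibility(QLabel *announcer, const QString &text)
{
    announcer->setText(text);                         // text to be spoken
    QAccessibleEvent event(announcer, QAccessible::Alert);
    QAccessible::updateAccessibility(&event);         // notify assistive tech
}

// Wiring it to a QAction (statusLabel is some label in your window):
// QObject::connect(action, &QAction::triggered, [=]() {
//     announceForAccessibility(statusLabel, QObject::tr("Printing started"));
// });
```

No idea if NVDA reacts to Alert on Windows; someone with a screen reader set up would have to confirm.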
-
I have run some tests using Orca on Linux not so long ago and the application was already accessible, so it should already be good.
-
@SGaist the app I work on targets Windows, with the global menu bar populated with plain old QAction instances, and NVDA doesn't pick up on any action being triggered. Do you mind checking whether you needed any additional configuration to get screen readers to react to QAction instances being triggered?
-
I had some experience with JAWS; basically it reads the text (and possibly the tooltip) of the element with focus.
IMHO the simplest method is to put the focus on a label that describes your current element after the action was triggered.
E.g.: click on the Print Preview menu entry -> the print preview window opens -> in its showEvent, set focus on the title "Print Preview".
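To make the idea concrete, here is a rough, untested sketch of that focus trick (PrintPreviewWindow and the member names are made up for illustration). The key details are that QLabel is not focusable by default, so the focus policy has to be changed, and that the focus move happens in showEvent:

```cpp
// Untested sketch of the focus-on-label trick: when the window opened
// by the action is shown, move keyboard focus to a label describing it,
// so the screen reader announces that text.
#include <QLabel>
#include <QShowEvent>
#include <QVBoxLayout>
#include <QWidget>

class PrintPreviewWindow : public QWidget   // hypothetical window
{
public:
    explicit PrintPreviewWindow(QWidget *parent = nullptr) : QWidget(parent)
    {
        m_title = new QLabel(tr("Print Preview"), this);
        m_title->setFocusPolicy(Qt::StrongFocus);   // labels can't take focus by default
        auto *layout = new QVBoxLayout(this);
        layout->addWidget(m_title);
    }

protected:
    void showEvent(QShowEvent *event) override
    {
        QWidget::showEvent(event);
        m_title->setFocus();   // the focus change is what JAWS/NVDA react to
    }

private:
    QLabel *m_title = nullptr;
};
```

It's a workaround rather than proper accessibility-event plumbing, but it has the advantage of relying only on focus handling, which every screen reader already follows.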