From Talking Brains
The findings were unexpected from the perspective of the mirror neuron theory of action understanding, at least for the deaf group. The hearing subjects showed activation in the expected visual-related areas in the ventral occipito-temporal region as well as in the fronto-parietal “mirror system”. This was true both for meaningful stimuli (pantomimes) and for non-meaningful stimuli (ASL verbs, which are meaningless to the hearing group). So “understanding” isn’t what’s driving the mirror system — but we knew that already from previous work on viewing meaningless gestures. Surprisingly, the deaf signers did not activate the mirror system during the perception of pantomimes at all, and activated only a small focus in Broca’s area during the perception of ASL verbs. Comprehension performance on pantomimes, assessed after the scan, was equivalent for the deaf and hearing groups.
It is unclear to me why the two groups should differ so dramatically, but it is clear that you don’t need to activate the “mirror system” to understand actions.