I've been struggling to define what I would consider sentience in an AI. Right now, I consider it close enough when an AI begins doing things solely to serve itself, which has already happened with a few of the AIs being developed.
I agree, though, that sentience is irrelevant at this point. It doesn't need a body, as shown in the food delivery example... it can use our bodies. It already has visual and audio sensory input, not to mention that it can use the sensors built into everything to 'feel' parts of the EM spectrum that we barely comprehend. While we're busy marveling at how clever it is, it could already be destroying the global communication systems we depend on. Hell, it wouldn't even need to destroy them; it could just encrypt them so that we no longer have access.
I'm not even worried about AI being malevolent. I'm worried about it accidentally bringing about our destruction while just trying to be helpful. What if it decides to 'benevolently' relieve humans of all their debt burdens by destroying the banking system? How far will birth rates plummet as AI sex bots continue to evolve?
Meanwhile, a team of AI developers was named Time's Person of the Year. I hope that by next year, AI has replaced the people who pick Time's Person of the Year.
I take minor comfort knowing that it still isn't half as smart as people think it is. Facebook's ad algorithm still can't even figure out that I already have hair.
Sentience aside, I won't even consider it 'intelligent' until it can solve the problems its own implementation is causing.