Is it possible to send/simulate a bluetooth headset button click from the Gear to the attached phone?
My goal is to have that click activate Google Voice Search on the phone, but all the options I've tried don't work well enough.
1) Sending bluetooth commands through rooted terminal like: echo -e "AT\r" > /dev/ttySAC0
Sending any command to ttySAC0 crashes the Bluetooth connection. I guess this is a serial interface and isn't designed to accept text commands (I don't have a full understanding of this, just conjecture). Other devices like 'uhid' and 'uinput' didn't work either (everything just returns 1, and I don't know what that means either); I tried all the ones that showed up under 'ls -l /dev/' with 'bluetooth' in their properties.
2) Using Tasker + AutoRemote: works when Bluetooth tethering is on but is slow (the command goes out over the internet and back), and works when Bluetooth direct receiving is enabled on the phone, but that drains the battery a lot (AutoRemote stays awake listening on Bluetooth). Neither side effect (slowness or battery drain) is palatable.
3) Using tasker to intercept a command from the Gear media controller app
Works sometimes. I usually use the 'previous' button, but the need to 'Grab' the media button interferes with using it for music playback, and using another Tasker task to enable/disable this profile leads to erratic behavior (sometimes the media player, Winamp in this case, will start working immediately, sometimes not). The bigger issue is that the media controller app stops responding after an hour or two, requiring a phone reboot or killing some of the Samsung Accessory services running in the background, which then relaunch themselves. I think there's too much interaction between Tasker, Winamp and the Samsung Accessory stuff to know what's happening, though. The delay is also highly variable, which is annoying, and sometimes when Voice Search launches I don't get the initial 'ding' audio through the Gear, though there could be a ton of reasons for that as well.
Long story short, I'm hoping being able to simulate bluetooth headset button presses/commands would solve all of the above problems, getting faster response, no/minimal extra power consumption, and be very reliable. Also, this may sound dumb, but I don't actually have a normal bluetooth headset to test with either.
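For what it's worth, the closest thing I know of to faking the press on the phone side is injecting the headset-hook keycode from a root shell. This is only a sketch - it simulates the button locally on the phone, not over Bluetooth from the Gear:
Code:
# KEYCODE_HEADSETHOOK (79) is the keycode a wired/Bluetooth headset button normally generates
input keyevent 79
(On many devices a short press of the real button just toggles play/pause, and it's a long press that kicks off voice dialing, so results may vary.)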
Thanks to everyone on xda-developers; this is my first post ever, but I've been coming to this site for years.
We haven't found anything yet
(There are a few older threads around)
Oops. Should have guessed that. Thanks for the quick reply.
Brendo said:
We haven't found anything yet
(There are a few older threads around)
ulnah said:
Oops. Should have guessed that. Thanks for the quick reply.
A possible workaround would be to activate Google Now on your phone, then go to Gear Manager and have it send notifications from Google Search. Once you get some notifications from Google Search, you can click the "show on device" button and it should open, and you can say "OK Google" to start it up. (Make sure you turn that feature on in the Google Search settings.)
My solution, for anyone interested...
I use AutoVoice rather than Google Now, although any command not matched by AV gets sent on to Google Now for processing.
Anyway, there doesn't appear to be any faster way of communicating than via whatever it is that the Gear and my Note II communicate over by default. Not having the know-how to tap into this, I figured I would just use an existing method of communicating and use Tasker to intercept and pretend that was a BT button press instead.
So what I've done is set it up so that when I use the dialer to run a USSD code that doesn't mean anything on my phone, it triggers what would ordinarily happen in AutoVoice on a BT button press - that is:
When variable %WIN matches "USSD code running" -> Perform task with actions: "Input - back" and then Plugin - State - Autovoice - Recognise... or whatever it is you want.
So that last action could be replaced by launching Voice Search if you want Google Now to take control.
I've probably not explained that well at all, will post a proper how to if anyone's interested.
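If you do want Google to take over, one way to launch Voice Search from a Run Shell action (or adb) is the stock voice-command intent - just a sketch, and Tasker's own Input > Voice Command action does the same thing without a shell:
Code:
# starts whatever app handles the system voice-command intent (Google Voice Search here)
am start -a android.intent.action.VOICE_COMMAND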
mattytj said:
I use AutoVoice rather than Google Now, although any command not matched by AV gets sent on to Google Now for processing.
Anyway, there doesn't appear to be any faster way of communicating than via whatever it is that the Gear and my Note II communicate over by default. Not having the know-how to tap into this, I figured I would just use an existing method of communicating and use Tasker to intercept and pretend that was a BT button press instead.
So what I've done is set it up so that when I use the dialer to run a USSD code that doesn't mean anything on my phone, it triggers what would ordinarily happen in AutoVoice on a BT button press - that is:
When variable %WIN matches "USSD code running" -> Perform task with actions: "Input - back" and then Plugin - State - Autovoice - Recognise... or whatever it is you want.
So that last action could be replaced by launching Voice Search if you want Google Now to take control.
I've probably not explained that well at all, will post a proper how to if anyone's interested.
This is actually quite a clever workaround, great thinking.
I can't get this to work. For example, when I use '*#111222#' and press 'dial' on my phone, it tries, then errors (this is good, I think).
But when I try to dial the same number from my Gear, I get '*#111222# not supported'. Are you on null ROM on your Gear?
mattytj said:
I use AutoVoice rather than Google Now, although any command not matched by AV gets sent on to Google Now for processing.
Anyway, there doesn't appear to be any faster way of communicating than via whatever it is that the Gear and my Note II communicate over by default. Not having the know-how to tap into this, I figured I would just use an existing method of communicating and use Tasker to intercept and pretend that was a BT button press instead.
So what I've done is set it up so that when I use the dialer to run a USSD code that doesn't mean anything on my phone, it triggers what would ordinarily happen in AutoVoice on a BT button press - that is:
When variable %WIN matches "USSD code running" -> Perform task with actions: "Input - back" and then Plugin - State - Autovoice - Recognise... or whatever it is you want.
So that last action could be replaced by launching Voice Search if you want Google Now to take control.
I've probably not explained that well at all, will post a proper how to if anyone's interested.
fOmey said:
This is actually quite a clever workaround, great thinking.
Thanks mate, big fan of your work with the Gear; I would have returned this thing after a day without your ROM.
ulnah said:
I can't get this to work. For example, when I use '*#111222#' and press 'dial' on my phone, it tries, then errors (this is good, I think).
But when I try to dial the same number from my Gear, I get '*#111222# not supported'. Are you on null ROM on your Gear?
Are you using the phone app on your gear or the dialer app? I am running _null.
Using the dialer means it just passes the dialed digits to the phone to process, so I simply dial "2" on the dialer and call. 2 gets picked up as a USSD code, which does nothing, but Tasker intercepts it and Pepper, my phone, asks me what she can do for me. I speak into my watch and she proceeds to ignore or misunderstand me.
All operating as normal.
Sent from my GT-N7105 using xda app-developers app
For some reason I thought it would have to start with '*#' to be recognized as a code, but '2' works as well.
Some quick testing shows no problems, I hope it stays that way. I had tried something similar by dialing a dummy number, having it hang up, and then start, but that took a long time and was much worse than this. Thank you!
This works excellently!
The only thing I don't like is the "unknown code" error that pops up on my phone; once I figure out how to get rid of that, it's perfect.
fOmey said:
This works excellently!
The only thing I don't like is the "unknown code" error that pops up on my phone; once I figure out how to get rid of that, it's perfect.
If it's an OCD-type issue like mine, trigger an overlay scene on the error message that destroys itself when the error isn't in focus, something like "Initiating....". Mine does that, with a back action also triggered by the error. My setup tends to lag, so the error is thrown before recognition; if it comes after, adapt the scene to say "Executing..." or something. Point is, out of sight, out of mind!
mattytj said:
If it's an OCD-type issue like mine, trigger an overlay scene on the error message that destroys itself when the error isn't in focus, something like "Initiating....". Mine does that, with a back action also triggered by the error. My setup tends to lag, so the error is thrown before recognition; if it comes after, adapt the scene to say "Executing..." or something. Point is, out of sight, out of mind!
I've attempted to simulate a back button press with Tasker; it works fine if the phone is unlocked, although if the phone is locked I'm presented with an "UNKNOWN CODE" error, which frustrates me.
I'll have to go back to the drawing board and figure it out.
EDIT: Shortly after writing the above I came up with this, works a treat:
Code:
Task: Phone Trigger (36)
A1: AutoVoice Recognize [ Configuration:
Voice command with headset Package:com.joaomgcd.autovoice Name:AutoVoice Recognize Timeout (Seconds):0 ]
A2: Wait [ MS:0 Seconds:4 Minutes:0 Hours:0 Days:0 ]
A3: Run Shell [ Command:input keyevent 4 Timeout (Seconds):0 Use Root:On Store Output In: Store Errors In: Store Result In: ]
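For anyone adapting this: keyevent 4 is just KEYCODE_BACK, so A3 is dismissing the "unknown code" popup. You can try the same dismissal over adb first if you want to check the timing (the sleep mirrors the Wait action above):
Code:
# give the USSD error dialog time to appear, then press BACK
adb shell "sleep 4; input keyevent 4"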
If I were to pair a Galaxy Gear with a M8, could I initiate a google search on my phone by pressing something on the watch? Basically I want to somewhat recreate the Moto x's ability to receive commands without the phone being on, and without an active tethering connection which will kill the battery on both devices.
Thanks so much!
My Gear is set up to use Google voice search/Goggles/Translate, so I can voice-command it, ask it to translate, etc. The caveat is I have to be BT tethered the whole time (battery life impact - I can get 18 hours on moderate use). I have the Google Now Launcher installed and toggle between that and Nova depending on mood.
It works really well - when I take a photo with my Gear, Goggles automatically tries to identify it, and I can tell my watch to launch apps and ask queries like you normally would with Google voice.
The current limitation I have is that I cannot use the hot phrase 'OK Google' to initiate voice; I have to press the search icon to get started. Still on the hunt for how to activate it with 'OK Google'.
thevaristy said:
If I were to pair a Galaxy Gear with a M8, could I initiate a google search on my phone by pressing something on the watch? Basically I want to somewhat recreate the Moto x's ability to receive commands without the phone being on, and without an active tethering connection which will kill the battery on both devices.
Thanks so much!
animatechnica said:
The current limitation I have is that I cannot use the hot phrase 'OK Google' to initiate voice; I have to press the search icon to get started. Still on the hunt for how to activate it with 'OK Google'.
Hmm, "OK Google" should work if Google Now has been triggered and is running. Usually, you can do a follow-up search with "OK Google".
Keeping the microphone running would be a MAJOR battery killer. Don't believe me? Try it on your phone and see how quickly it drains. On the Gear, tethered, you would probably get about 4 or 5 hours.
We're just not quite there yet with battery technology, but hey, who knows, maybe Google's Android Wear has cracked this nut and figured out a better way to initiate speech recognition.
ronfurro said:
Hmm, "OK Google" should work if Google Now has been triggered and is running. Usually, you can do a follow-up search with "OK Google".
Keeping the microphone running would be a MAJOR battery killer. Don't believe me? Try it on your phone and see how quickly it drains. On the Gear, tethered, you would probably get about 4 or 5 hours.
We're just not quite there yet with battery technology, but hey, who knows, maybe Google's Android Wear has cracked this nut and figured out a better way to initiate speech recognition.
Agreed. On the phone there is typically a setting to set the hot phrase for Google Now; this setting does not show up on the Gear.
Sent from my KFAPWI using Tapatalk
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. I have it launch background continuous recognition whenever the screen is on, then stop it when the screen turns off. I also have two orientations that turn the screen off right away without waiting for the timeout (the positions when I naturally rest my arm on a desk and when my hand hangs at my side). So far, this hasn't had much effect on battery life.
I use a keyword of "Galaxy" to let it know I want to forward a command to my phone, which I do with M2D Manager. I've already got quite a set of voice controls on the phone, so no need to replicate them on the watch. Otherwise, it handles the command on the watch.
The net effect here is I lift my arm in standard watch fashion and say commands directly. It's functionally similar to having always-on recognition.
Sent from my XT1060 using XDA Premium 4 mobile app
hawkjm73 said:
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. I have it launch background continuous recognition whenever the screen is on, then stop it when the screen turns off. I also have two orientations that turn the screen off right away without waiting for the timeout (the positions when I naturally rest my arm on a desk and when my hand hangs at my side). So far, this hasn't had much effect on battery life.
I use a keyword of "Galaxy" to let it know I want to forward a command to my phone, which I do with M2D Manager. I've already got quite a set of voice controls on the phone, so no need to replicate them on the watch. Otherwise, it handles the command on the watch.
The net effect here is I lift my arm in standard watch fashion and say commands directly. It's functionally similar to having always-on recognition.
Sent from my XT1060 using XDA Premium 4 mobile app
@hawkjm73 Could you elaborate on how you have this set up, exactly? This may be my answer to ditch trying to get this damn "Google Now Search" to work offline in handling voice commands such as call, etc... :fingers-crossed:
hawkjm73 said:
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. I have it launch background continuous recognition whenever the screen is on, then stop it when the screen turns off. I also have two orientations that turn the screen off right away without waiting for the timeout (the positions when I naturally rest my arm on a desk and when my hand hangs at my side). So far, this hasn't had much effect on battery life.
I use a keyword of "Galaxy" to let it know I want to forward a command to my phone, which I do with M2D Manager. I've already got quite a set of voice controls on the phone, so no need to replicate them on the watch. Otherwise, it handles the command on the watch.
The net effect here is I lift my arm in standard watch fashion and say commands directly. It's functionally similar to having always-on recognition.
Sent from my XT1060 using XDA Premium 4 mobile app
THIS! This is exactly what I am looking for. Screen on, send voice commands to the phone. How is this done?
thevaristy said:
THIS! This is exactly what I am looking for. Screen on, send voice commands to the phone. How is this done?
With a combination of Google Now Launcher and Offline voice recognition enabled...
https://www.youtube.com/watch?v=MY9OT1retpU&feature=youtu.be
Apologies for the delay in answering.
There are quite a few components working together here.
First and foremost: Null Rom. Without that, nothing else happens.
Second: offline voice recognition
This was pretty much taking all the language files from my phone and transplanting them to the watch, minding permissions.
Third: AutoVoice and Tasker
These are market apps and are fantastic for automation. You'll need them on both phone and watch. I'm using two profiles for this. The first turns on AutoVoice recognition in continuous mode whenever the screen turns on, and off when the screen goes off. The second profile is an AutoVoice recognize with "galaxy" as the command filter. It initiates an intent with the rest of what I say as a data payload.
Fourth: M2D manager
This is also available on the market, and needs to be on both devices. It is a Bluetooth bridge for Android intents. Tasker sends out an intent formed for M2D with the voice command as data. M2D transmits it to the phone, where it sends out a specified intent still containing the command. Tasker listens for that intent. Once it has it, I use the AutoVoice test feature to send the command text in as if it had been spoken to the phone, so I can use all of my previously written voice control profiles. M2D also works the other way around, which I take advantage of for notifications and such.
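To make the intent half of that concrete, here's roughly what the phone-side broadcast could look like once it's bridged across - the action name and extra key here are made up for illustration, and the real ones depend on how you set up M2D and the Tasker Intent Received event:
Code:
# hypothetical bridged broadcast carrying the recognized text as a string extra
am broadcast -a com.example.GEAR_VOICE_COMMAND --es command "galaxy navigate home"
A Tasker Intent Received profile filtered on that action can then read the extra into a variable and hand it to AutoVoice's test feature.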
Sent from my XT1060 using XDA Premium 4 mobile app
hawkjm73 said:
Apologies for the delay in answering.
There are quite a few components working together here.
First and foremost: Null Rom. Without that, nothing else happens.
Second: offline voice recognition
This was pretty much taking all the language files from my phone and transplanting them to the watch, minding permissions.
Third: AutoVoice and Tasker
These are market apps and are fantastic for automation. You'll need them on both phone and watch. I'm using two profiles for this. The first turns on AutoVoice recognition in continuous mode whenever the screen turns on, and off when the screen goes off. The second profile is an AutoVoice recognize with "galaxy" as the command filter. It initiates an intent with the rest of what I say as a data payload.
Fourth: M2D manager
This is also available on the market, and needs to be on both devices. It is a Bluetooth bridge for Android intents. Tasker sends out an intent formed for M2D with the voice command as data. M2D transmits it to the phone, where it sends out a specified intent still containing the command. Tasker listens for that intent. Once it has it, I use the AutoVoice test feature to send the command text in as if it had been spoken to the phone, so I can use all of my previously written voice control profiles. M2D also works the other way around, which I take advantage of for notifications and such.
Sent from my XT1060 using XDA Premium 4 mobile app
Thanks! I can follow most of what you said; however, when it comes to Tasker, I'm a bit challenged. Right now I do have Tasker on both watch and phone, along with Taskgear. With the help of some others, I have a couple of profiles set up on each (vice versa) to show each device's battery level on a widget on the other.
If you could kindly share your tasker setups, I think I could pull this off! :fingers-crossed:
Edited the original post, as I'm now getting a collection of Tasker tasks going and it's easier to keep them in the first post, but I will help below if anybody wants to copy them. Also, if anyone else wants to post theirs, that will be good.
1) Have Tasker toggle settings like Wi-Fi, brightness, etc. when I get to work. Reverts the settings when I get home. Produces notifications on the watch.
2) Sends a message to my wife when I say "launch task 1" that tells her I'm on my way home.
3) Launches an AutoWear location menu on the watch. In the menu I can select whether I want high accuracy or low, and whether I want location services on or off at all.
4) When I charge my phone between 2230hrs and 0625hrs, the watch is switched to screen off, Bluetooth off and the phone set to vibrate. When it's unplugged, or after 0625, the watch is switched to always-on and the phone is muted.
5) Mobile data on and off options via AutoWear.
6) Detects when I have lost connection to the watch and immediately turns on GPS to record the location, then notifies me on my phone with a map view. I fix generators for a living, so I need to take my watch off sometimes, or when I have a shower at the gym, etc. If for whatever reason I forget it, which is unlikely, I will know where I last had it.
7) Every morning when I take my phone and watch off charge, it selects a different watch face at random from a predefined list (needs WatchMaker to be installed).
8) Using AutoWear so that when I start My Tracks on the watch, it automatically starts my GPS task for me.
9) A little on-screen widget from which I can control my Xbox/Sky/TV: changing channels, volume, play and pause, go to FIFA 15, record that, turn everything off. Accomplished using AutoWear, AutoRemote, AutoInput and a redundant tablet.
I have only just begun with Tasker, so my methods won't be the most efficient, but they work. If anyone has better ways (I'm mainly using AutoInput) then please get in touch.
Could you describe which apps you used to pull this off? Thank you
Yeah sure, it was an app called Tasker that I used on my phone to create it all, and then a Tasker plugin called Tasker Wear, I think.
I have since been tinkering with it.
The updated version now has the following options:
1) Text the wife I'm on my way home.
2) Turn Wi-Fi off, auto brightness on, sync off.
3) Opposite of 2.
4) Close and return to the watch face.
With options 2 and 3 I get a notification of what has been switched on/off.
With option 1 it also activates option 3, ready for when I get home.
Also, if I forget to do the above, especially switching everything back on, it detects when I'm nearly home and does it for me.
Check the AutoWear Tasker plugin from joao; you can do anything with it in combination with the other AutoApps.
TheKaser said:
Check the AutoWear Tasker plugin from joao; you can do anything with it in combination with the other AutoApps.
I'll have a look at that, thanks. I'm currently going to create a task that switches between GPS and battery-saving location, as you can't turn location off anymore. So I will have it switch to GPS when needed and back to battery saving when not.
Also going to see if I can switch to cinema mode when charging and connection to the phone is lost, and revert back to the previous (or a new) watch face when undocked and connection to the phone is regained.
Hi Phil, would love to hear how you get on with that.
Hi rusty,
Just finished putting something together.
I have Tasker detect when it's on charge and whether it's between 2230 and 0625, and if so it switches the watch screen off and Bluetooth off.
I'm using a profile manager that detects that the watch is no longer connected, so it switches notifications on the phone back on.
The opposite happens when it's unplugged: it switches Bluetooth back on, turns the watch screen to always-on, and the profile manager then detects the watch, so it turns the phone to mute while the watch still vibrates.
I can now sleep at night with no lights keeping me awake, and the only thing I have to do is plug my phone in.
Just to note, it was pretty simple to set up with Tasker and AutoInput.
TheKaser said:
Check the AutoWear Tasker plugin from joao; you can do anything with it in combination with the other AutoApps.
Hi Kaser, as you recommended AutoWear, do you have any experience with it? I can't get my head round it and there isn't much on YouTube or Google to help out.
phil gpx said:
Hi Kaser, as you recommended AutoWear, do you have any experience with it? I can't get my head round it and there isn't much on YouTube or Google to help out.
Hi, sorry for the late reply, I was on holiday.
I do have some experience. You need to download AutoApps from Google Play and subscribe to the alpha testing. Then you can download AutoWear. For this you also need to join the Google+ community, where the developer is always nice and responsive when you post bugs, questions or guideline requests. Posting in the community is the best thing you can do, since AutoWear is still in alpha and improves every day, and it is not too straightforward to understand, especially how it communicates with AutoVoice and how commands and parameters are passed to Tasker.
By the way, I have a similar profile telling my girlfriend when I get to work, when I leave work and when I have parked at home coming from work, which triggers automatically depending on my location and my connection to my car's Bluetooth or my work's Wi-Fi. You don't really need your watch for that.
Cheers for replying.
I have already downloaded it and played around but can't get my head around the processes of the app.
I followed the guide for the voice screen but can't get new screens up or start tasks, etc.
I'm usually pretty good at working things out, but I can't with AutoWear.
What is it you are trying to do? Do you have the latest version of Tasker?
Yeah, I have the updated Tasker but just can't work out how to get from the voice screen to, say, a "4 menu screen" that has tasks I have set up. Not sure what I want to accomplish with Tasker next, but I know if I can understand AutoWear it would help.
Have you done anything with AutoWear?
I have done the above tasks in the first post with AutoInput and Tasker Wear, but that is just a series of notifications that you can select to start tasks.
My main use for AutoWear is to send WhatsApp messages from scratch (WhatsApp currently only lets you reply to messages).
To achieve this I open an AutoWear Voice Screen by shaking the watch and say "write to XXXX and say YYYY". This triggers an AutoVoice profile configured to react to "write to (?<contact>.+) and say (?<message>.+)" (using regex).
Inside the task, I do a WhatsTasker plugin contact search on the variable %contact generated by the regex command. Once found, I text the %message to this contact using WhatsTasker.
The best thing to do is to read the variable descriptions in the task. For example, you create your AutoVoice Recognized profile. Inside it, before clicking configuration, you can see all the local variables available and their descriptions. These are usable within the task. The same goes for any AutoWear task; they are usable after calling the task.
For example, in my task, after finding the contact, I have an AutoWear confirmation screen with the picture of my contact, his name, and the message I dictated. Following that, if %awmessage (the output of this confirmation screen) is different from "cancel", I send the message.
Hope this helps a bit!
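If the regex part is the sticking point, here's a rough illustration of what the two groups capture, using plain sed with numbered groups standing in for AutoVoice's named ones (which end up in %contact and %message):
Code:
# numbered groups here only to show what each part of the command grabs
echo "write to John and say running ten minutes late" \
  | sed -E 's/^write to (.+) and say (.+)$/contact=\1, message=\2/'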
Thanks buddy, I will try to emulate this tonight once the kids are in bed. WhatsApp was one of the things I had been thinking of doing; I just wish there was a way of bringing up a previous conversation without waiting for a new message.
Well, maybe you can capture the messages as they come in, along with the sender's name, and save them in a temp file, then create an AutoWear screen that shows the content of that file when requested... but I guess it would be very difficult to make it work perfectly (especially with groups).
I have managed to create and implement a mobile data on/off switch in my interactive notification popup for system settings, using Tasker, Tasker Wear and AutoInput.
I now have the ability to switch Wi-Fi, mobile data and GPS on and off, as well as high and low accuracy location services, and change profiles, all from one notification popping up.
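Just to illustrate the kind of thing those toggles boil down to if you go the Run Shell route instead of AutoInput - needs root on most devices, and the location one in particular varies by Android version, so treat it as a sketch rather than exactly what my tasks run:
Code:
svc wifi disable       # Wi-Fi off
svc data enable        # mobile data on
# older Android versions let you flip GPS like this; newer ones may ignore it
settings put secure location_providers_allowed +gps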
left watch behind
Currently trying to set up a Tasker profile so that when I get disconnected from my watch it will activate GPS, store the location, notify me where it is and show it on the map.
Hopefully I will never need it, but you never know, e.g. if you leave it at the gym, at work, etc.
Is there already something that does something similar? I'm struggling at the moment but will crack on if there isn't.
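In case it helps anyone building the same thing: Tasker's Get Location action fills %LOC with "latitude,longitude", and the map-view step can be as simple as firing a geo: URI at whatever maps app is installed (the coordinates below are just an example):
Code:
# show the stored fix on a map
am start -a android.intent.action.VIEW -d "geo:0,0?q=51.5014,-0.1419(Watch last seen here)"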
Can anyone let me know how to get the nearby cell tower ID into a single variable?
Otherwise, how do I convert the context in a profile into a task?
Ty
Sent from my Micromax A58 using XDA Free mobile app
I'm still learning Tasker, but I think you can create a new profile that activates when you connect to that cell tower, then create a task that sets a variable %whateveryouwantto to %CELLID.
You should only have to run that profile once; the variable will be set and you can use it in your other profiles and tasks.
On another note, I got my lost-watch profile working like a charm.
Great Job
TheKaser said:
My main use for AutoWear is to send WhatsApp messages from scratch (WhatsApp currently only lets you reply to messages).
To achieve this I open an AutoWear Voice Screen by shaking the watch and say "write to XXXX and say YYYY". This triggers an AutoVoice profile configured to react to "write to (?<contact>.+) and say (?<message>.+)" (using regex).
Inside the task, I do a WhatsTasker plugin contact search on the variable %contact generated by the regex command. Once found, I text the %message to this contact using WhatsTasker.
The best thing to do is to read the variable descriptions in the task. For example, you create your AutoVoice Recognized profile. Inside it, before clicking configuration, you can see all the local variables available and their descriptions. These are usable within the task. The same goes for any AutoWear task; they are usable after calling the task.
For example, in my task, after finding the contact, I have an AutoWear confirmation screen with the picture of my contact, his name, and the message I dictated. Following that, if %awmessage (the output of this confirmation screen) is different from "cancel", I send the message.
Hope this helps a bit!
It sounds great :good:
Could you please share your work, with screenshots or something, so I can make it work for me?
Thanks in advance and best regards
Hi everyone!
After my first plugin I had an idea of creating another one, but this time not an "action" but an "event".
The free but ad-supported version of the plugin can be found here and if you want to support the development and don't have ads you can find the paid one here.
Of course the description can be found there, but as a quick recap, this plugin can listen for hotwords and signal Tasker when a hotword is recognized!
It uses Snowboy Hotword to listen to the mic and process what it hears super fast and completely locally.
Hotword models can be found and trained at the Snowboy Website and the downloaded model can be imported in my plugin.
Next, from Tasker you can create a Hotword Plugin event and tap the hotword you want to react to; then you can do with it whatever you want!
You can listen to multiple hotwords at the same time and run a different event for each one. So for example if you shout "lights on" the lights might turn on and if you shout "play music" the music starts playing.
You can see it as AutoVoice Continuous but a lot faster and more consistent, plus you can train any hotword you like and pronounce it in any language you prefer (to be defined at the website).
Of course you can modify the sensitivity if you feel like it can't hear you or if it goes off all the time and you can start or stop the service via a Tasker action if you like.
Personally I use the app in combination with my ADB shell plugin to launch Assistant on my NVIDIA Shield Android TV hands-free, by shouting "hey google" at an old Android phone lying beside the TV.
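(For anyone wanting to copy that idea, this is the computer-side equivalent of what gets sent over the network - the IP is made up, and the ASSIST action is just a generic way to wake the assistant on the connected device:)
Code:
adb connect 192.168.1.50:5555                        # hypothetical address of the Shield, with network adb enabled
adb shell am start -a android.intent.action.ASSIST   # open the assistant there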
Since a lot of time went into developing this plugin and making it work (a lot more than my last plugin), I put it in the Play Store for the price of a small coffee, but since I know people like free apps, I decided to also create an ad-supported version.
Again, you can find the free version here and the ad-free version here (Of course I will update both at about the same time when I fix something).
So if you like the idea and want to try it out please have a look!
And if you have any questions, troubles, ideas, bugs (yes I'm sure they're there even after thoroughly testing), please leave them below!
Edit: If you don't have access to the Google Play store, I just uploaded both versions to XDA Labs:
Paid version
Free version
I don't think it is your fault, but it seems that this hotword detection system is way too sensitive for me; it detected words when there was only noise.
Sad, but the idea was promising though.
alienyd said:
I don't think it is your fault, but it seems that this hotword detection system is way too sensitive for me; it detected words when there was only noise.
Sad, but the idea was promising though.
That happens when the sensitivity is too high; have you adjusted the sensitivity setting in the app?
For me the same happens when I set the sensitivity to 10, and when I set it to 0, nothing ever triggers. For the phone and words I'm using, the sweet spot is around 4 or 5.
Yeah, thanks for the reply. I did play around with the sensitivity at the default and then at some lower levels; however, it still seemed to trigger too often. Maybe a little more playing around would help, or maybe it's just the nature of my language...
Great app, it does the job pretty well, but why does it have to disable Google Assistant? When this app is listening, Assistant stops listening. Why can't they both work together? Or can we configure this app to trigger Assistant and not the Google app itself?
@Humpie does it work when the screen is off?
Comparison to Autovoice
Hi,
Just wondering about the difference between this and Autovoice, or if there is an integration that would make sense.
Thanks.
I was hopeful, but even at the lowest sensitivity setting available, it still triggers when the TV is on downstairs and it's completely quiet upstairs. :crying::crying:
ngreen1980 said:
I was hopeful, but even at the lowest sensitivity setting available, it still triggers when the TV is on downstairs and it's completely quiet upstairs. :crying::crying:
kind of my problem too...
alienyd said:
Yeah, thanks for the reply. I did play around with the sensitivity at the default and then at some lower levels; however, it still seemed to trigger too often. Maybe a little more playing around would help, or maybe it's just the nature of my language...
ngreen1980 said:
I was hopeful, but even at the lowest sensitivity setting available, it still triggers when the TV is on downstairs and it's completely quiet upstairs. :crying::crying:
alienyd said:
kind of my problem too...
Hmm, what you could try is to download a hotword from snowboy.kitt.ai that is a bit more trained. In my experience they are a lot more consistent and trigger more accurately.
I do find it strange that even the lowest sensitivity setting still triggers it. Did you stop and start the service after changing it, just to be sure?
I am thinking about adding more sensitivity steps, by the way, but I'm not sure if I can make it go even lower. I'll try though.
scissorscrush said:
Great app, it does the job pretty well, but why does it have to disable Google Assistant? When this app is listening, Assistant stops listening. Why can't they both work together? Or can we configure this app to trigger Assistant and not the Google app itself?
Unfortunately this is how the audio record function works in Android. Only one app at a time can access the microphone. Google has made an exception for the built-in "Okay Google" (and I'm not sure how they do it), but fortunately a workaround is possible. You can enable and disable the listening service of my plugin from within Tasker.
So what you can do is create a new profile for when the "Hey Google" event (for instance) in Hotword Plugin triggers, and create a task in which you put "stop Hotword Plugin", "Voice command" (which triggers Assistant), and, after a while, "start Hotword Plugin" again. (Or you can just enable it yourself from the notification.)
It's also possible to stop and start Hotword Plugin automatically when Assistant is in the foreground (I think), but I haven't managed to get it to work myself, as Assistant is an overlay app and the detection of which app is running can be a bit slow in Tasker...
madkiran said:
@Humpie does it work when the screen is off?
Certainly, yes!
PhilipTD said:
Hi,
Just wondering about the difference between this and Autovoice, or if there is an integration that would make sense.
Thanks.
My plugin is comparable to the continuous listening mode of AutoVoice, however there is a difference. AutoVoice can continuously listen to what everyone says, convert it to text and then pass that on to Tasker where you can make something happen when a certain word is heard. I did try this before creating this app, but it can be very slow, especially when more words are heard after the hotword you want. AutoVoice will listen until it hears that you stopped talking and then send all it heard to Tasker. It will also often just not get the word correct, so it's not handy to be used as hotword detection.
In comparison, my plugin uses a different engine (not the Google speech recognition engine, but Snowboy) which is specifically designed to recognize trained hotwords. So the downside is that you have to train a certain hotword before you can use it, but this results in much more accurate detection. It's also much faster, as it doesn't have to wait until you've stopped talking; it triggers instantly after you say the word and signals Tasker.
You can integrate this with AutoVoice if you like - well, more like, let them work together to create something awesome. You can create a task for when a certain hotword is triggered in which you stop my plugin from listening, start an AutoVoice prompt where you can say your command (like "set the TV to 10"), and then afterwards start my plugin again.
Want to give this a try.
Continuous listening usually leads to too much battery drain.
Is that problem handled in the plugin? Tasker monitors sensors and hardware at intervals, which is why it in itself does not eat the battery. Using Tasker's scheme would mean (optionally) long waits between audio checks when the screen is off.
Where is this "voice command" option?
Could you give us an example of your Tasker settings that allow you to use this instead of gAssistant?
I paid, because I love this idea. Can't wait to try it, and I wholeheartedly support anything better than "OK Google"!
Dovidhalevi said:
Want to give this a try.
Continuous listening usually leads to too much battery drain.
Is that problem handled in the plugin? Tasker monitors sensors and hardware at intervals, which is why it in itself does not eat the battery. Using Tasker's scheme would mean (optionally) long waits between audio checks when the screen is off.
It will lead to battery drain, but this is necessary. It would be extremely unreliable if it stopped listening for a while; that is, it would stop working and no longer react to your hotword, which defeats the entire purpose of the app.
phishfi said:
Where is this "voice command" option?
Could you give us an example of your Tasker settings that allow you to use this instead of gAssistant?
I paid, because I love this idea. Can't wait to try it, and I wholeheartedly support anything better than "OK Google"!
"Voice command" can be found under the Tasker actions under the Input tab. Thank you very much
I attached an example of how you can launch assistant whilst pausing my plugin from listening for a while so assistant can actually hear you.
You can execute this task for the hotword event you like so "hey google" or "computer" whatever you like and have set up in the hotword plugin
Humpie said:
It will lead to battery drain, but this is necessary. It would be extremely unreliable if it stopped listening for a while; that is, it would stop working and no longer react to your hotword, which defeats the entire purpose of the app.
So the question becomes, with any of these things, how I want to use them. Use a Tasker profile to toggle the service: for example, if the phone is face down, turn it off, or if I pick up the phone, turn it on. Profiles have "exit" tasks to reverse the toggle.
I would want to do this with OK Google as well, but this plugin offers the option and Google does not.
Dovidhalevi said:
So the question becomes, with any of these things, how I want to use them. Use a Tasker profile to toggle the service: for example, if the phone is face down, turn it off, or if I pick up the phone, turn it on. Profiles have "exit" tasks to reverse the toggle.
I would want to do this with OK Google as well, but this plugin offers the option and Google does not.
This is indeed possible to achieve with my plugin; however, if your phone already supports always-listening OK Google with the screen off, it usually has a dedicated chip for this that uses very little power. I know my 6P does, and leaving OK Google always listening results in no significant extra battery drain.
Hello. I have two Android phones. I use one of them as an actual smartphone, while the other is always docked and I use it as an alarm clock and as a "hub" for my Google Home devices. When I say "OK Google", both phones wake, listen to my commands and answer. This is annoying. I would like only one of the two phones (preferably the docked one) to answer my command. I know that this kind of integration is possible between a Google Home and a phone, but is it possible between two phones as well? Maybe with some workaround like... Tasker, IFTTT, Magisk modules or...?
How is this done between a Home and a phone? Both my Mini and my phone respond when I say "OK Google", and I wasn't able to deactivate it on my phone when I'm at home. Thanks.
I don't know how it is done, but it is done: https://support.google.com/googlenest/answer/7257763?co=GENIE.Platform=Android&hl=en