Storing onTouchEvent coordinates in Tasker

Is there a way to detect and store the coordinates of a touch event, using Android Tasker? Preferably using Scenes, on a non-rooted phone.
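For context, the thread title refers to Android's View#onTouchEvent callback, which is where a native app would read touch coordinates. Tasker can't hook this callback directly on a non-rooted phone, but a minimal Java sketch shows the data a Scene tap would need to capture and store (the class name and log tag are illustrative, not anything Tasker itself exposes):

import android.content.Context;
import android.util.Log;
import android.view.MotionEvent;
import android.view.View;

// A custom View that reads the X/Y of each touch from the MotionEvent.
// This is the raw data a Tasker Scene would need to surface.
public class TouchLoggingView extends View {
    private float lastX, lastY; // last touch coordinates, in view pixels

    public TouchLoggingView(Context context) {
        super(context);
    }

    @Override
    public boolean onTouchEvent(MotionEvent event) {
        if (event.getActionMasked() == MotionEvent.ACTION_DOWN) {
            lastX = event.getX();
            lastY = event.getY();
            Log.d("TouchLoggingView", "Touch at " + lastX + ", " + lastY);
        }
        return true; // consume the event
    }
}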

Related

Initiate Google Search on the phone

If I were to pair a Galaxy Gear with an M8, could I initiate a Google search on my phone by pressing something on the watch? Basically, I want to somewhat recreate the Moto X's ability to receive commands without the phone being on, and without an active tethering connection, which would kill the battery on both devices.
Thanks so much!
My Gear is set up to use Google voice search/Goggles/translate, so I can voice-command it, ask it to translate, etc. The caveat is that I have to be BT-tethered the whole time (battery life impact; I can get 18 hours on moderate use). I have the Google Now launcher installed and toggle between that and Nova depending on mood.
It works really well. When I take a photo with my Gear, Goggles automatically tries to identify it, and I can tell my watch to launch apps and ask queries like you normally would with Google voice.
The current limitation I have is that I cannot use the hot phrase "ok google" to initiate voice; I have to press the search icon to get started. Still on the hunt for how to activate it with "ok google".
thevaristy said:
If I were to pair a Galaxy Gear with an M8, could I initiate a Google search on my phone by pressing something on the watch? ...
animatechnica said:
The current limitation I have is that I cannot use the hot phrase "ok google" to initiate voice; I have to press the search icon to get started. ...
Hmm, "ok google" should work if Google Now has been triggered and is running. Usually, you can do a follow-up search with "ok google".
Keeping the microphone running would be a MAJOR battery killer. Don't believe me? Try it on your phone and see how quickly it drains. On the Gear, tethered, you would probably get about 4 or 5 hours.
We're just not quite there yet with battery technology, but hey, who knows, maybe Google's Android Wear has cracked this nut and figured out a better way to initiate speech recognition.
ronfurro said:
Hmm, "ok google" should work if Google Now has been triggered and is running. ...
Agreed. On the phone there is typically a setting to set the hot phrase for Google Now; that setting does not show up on the Gear.
Sent from my KFAPWI using Tapatalk
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. I have it launch background continuous recognition whenever the screen is on, then stop it when the screen turns off. I also have two orientations that turn the screen off right away without waiting for the timeout (the positions when I naturally rest my arm on a desk and when my hand hangs at my side). So far, this hasn't had much effect on battery life.
I use a keyword of "Galaxy" to let it know I want to forward a command to my phone, which I do with M2D Manager. I've already got quite a set of voice controls on the phone, so there's no need to replicate them on the watch. Otherwise, it handles the command on the watch.
The net effect is that I lift my arm in standard watch fashion and say commands directly. The functionality is similar to having always-on recognition.
Sent from my XT1060 using XDA Premium 4 mobile app
hawkjm73 said:
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. ...
@hawkjm73 Could you elaborate on how you have this set up, exactly? This may be my answer to ditch trying to get this damn "Google Now Search" to work offline for handling voice commands such as call, etc. :fingers-crossed:
hawkjm73 said:
I'm using offline recognition on my Gear in conjunction with Tasker and AutoVoice. ...
THIS! This is exactly what I am looking for. Screen on, send voice commands to the phone. How is this done?
thevaristy said:
THIS! This is exactly what I am looking for. Screen on, send voice commands to the phone. How is this done?
With a combination of the Google Now Launcher and offline voice recognition enabled...
https://www.youtube.com/watch?v=MY9OT1retpU&feature=youtu.be
Apologies for the delay in answering.
There are quite a few components working together here.
First and foremost: Null Rom. Without that, nothing else happens.
Second: offline voice recognition
This was pretty much a matter of taking all the language files from my phone and transplanting them to the watch, minding permissions.
Third: AutoVoice and Tasker
These are market apps and are fantastic for automation. You'll need them on both phone and watch. I'm using two profiles for this. The first turns on AutoVoice recognition in continuous mode whenever the screen turns on, and off when the screen goes off. The second profile is an AutoVoice Recognized event with "galaxy" as the command filter; it fires an intent with the rest of what I say as the data payload.
Fourth: M2D Manager
This is also available on the market, and needs to be on both devices. It is a Bluetooth bridge for Android intents. Tasker sends out an intent formed for M2D with the voice command as data. M2D transmits it to the phone, where it sends out a specified intent still containing the command. Tasker listens for that intent. Once it has it, I use the AutoVoice test feature to feed the command text in as if it had been spoken to the phone, so I can use all of my previously written voice-control profiles. M2D also works the other way around, which I take advantage of for notifications and such.
Sent from my XT1060 using XDA Premium 4 mobile app
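For anyone trying to reproduce the intent-bridge leg of this in code rather than in Tasker, the phone-side pattern is an ordinary Android broadcast receiver that picks up the recognized text. A minimal Java sketch; the action string and extra key here are hypothetical placeholders, not M2D Manager's actual intent format:

import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.util.Log;

// Receives the command text forwarded from the watch. Register this
// in the manifest or via Context#registerReceiver for the action below.
public class CommandBridgeReceiver extends BroadcastReceiver {
    // Hypothetical action the phone-side profile would listen for.
    public static final String ACTION_WATCH_COMMAND =
            "com.example.bridge.WATCH_COMMAND";
    public static final String EXTRA_COMMAND_TEXT = "command_text";

    @Override
    public void onReceive(Context context, Intent intent) {
        if (ACTION_WATCH_COMMAND.equals(intent.getAction())) {
            String command = intent.getStringExtra(EXTRA_COMMAND_TEXT);
            if (command != null) {
                // Hand the text to whatever voice-command handler you
                // already have (the post feeds it to AutoVoice's test
                // feature via Tasker).
                Log.d("CommandBridge", "Received watch command: " + command);
            }
        }
    }
}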
hawkjm73 said:
Apologies for the delay in answering. There are quite a few components working together here. ...
Thanks! I can follow most of what you said; however, when it comes to Tasker, I'm a bit challenged. Right now I have Tasker on both watch and phone, along with Taskgear. With the help of some others, I have a couple of profiles set up on each device to show the other device's battery level on a widget.
If you could kindly share your Tasker setups, I think I could pull this off! :fingers-crossed:

[Q] GPS coordinates are used by one app only

Hello, guys.
I am experiencing a problem with my HTC 8X running WP8. It seems that my GPS coordinates can only be used by one app at a time. When I try to use two apps for track logging (like CycleMaster), only one can create a GPX file with the GPS data; the other app returns empty track data. When I use a single tracking app it works well, but while I am taking photos on my route, the camera app cannot fill in GPS coordinates in the picture attributes.
Is it a problem with WP8 or with the HTC 8X? Is there any way to bypass the issue?
Many thanks in advance!
The inability to run two background GPS apps at once (like a track logger + Here Drive) is a platform limitation to keep battery life sane. Not being able to get geotagging in your photos while using a tracking app is very strange, though; my guess is it's either HTC-specific or possibly just an issue with your phone.

[Q] Automation on Windows Phone (tasker)

Hi.
Is there a WP app that can automate some annoying actions, like Tasker does on Android?
Is there an API that would let someone build an app like this?
Thanks
Unfortunately, no.
Try Geosetr from the Store; it starts or stops Wi-Fi and/or Bluetooth when you get home or to work. It's based on location.

Tasker as an app with subtasks / multiple notifications

Hi,
I am new to Tasker. I am using v5.1 on Google Pixel 2 running Android 8.1.
I couldn’t find an answer to my questions:
1. I have a parent task that checks whether a subtask has started; if it hasn't, the parent starts it. Is it possible to export my parent task as an app using App Factory such that, when I install it on another device, the app also contains the subtask?
2. I have a simple app that generates a test notification. While the notification is visible, if I run the task again, the new notification overrides the previous one. There's no race condition, as the two notifications are triggered seconds apart. Is it possible to combine the contents of the two notifications, or to keep them separate, with Tasker alone (without AutoNotification)? (See the sketch below.)
Regards
AK
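On question 2: the override behavior matches how Android notifications work at the API level, where reposting with the same ID replaces the existing notification, while distinct IDs keep them separate. A minimal Java sketch of that mechanism (the channel ID is a made-up placeholder; requires API 26+, which a Pixel 2 on 8.1 satisfies):

import android.app.Notification;
import android.app.NotificationChannel;
import android.app.NotificationManager;
import android.content.Context;
import android.os.Build;

public final class NotifyHelper {
    private static final String CHANNEL_ID = "test_channel"; // hypothetical
    private static int nextId = 1;

    // Posting with a *new* ID each time keeps earlier notifications on
    // screen; reusing one fixed ID reproduces the "override" behavior
    // the question describes.
    public static void postSeparate(Context ctx, String text) {
        NotificationManager nm =
                (NotificationManager) ctx.getSystemService(Context.NOTIFICATION_SERVICE);
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            nm.createNotificationChannel(new NotificationChannel(
                    CHANNEL_ID, "Test", NotificationManager.IMPORTANCE_DEFAULT));
        }
        Notification n = new Notification.Builder(ctx, CHANNEL_ID)
                .setSmallIcon(android.R.drawable.ic_dialog_info)
                .setContentTitle("Test notification")
                .setContentText(text)
                .build();
        nm.notify(nextId++, n); // distinct ID per notification
    }
}

If Tasker keys its notifications the same way, giving each Notify action a distinct title may likewise keep them from replacing one another, though that's an assumption worth testing.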

Help with Simple OCR app using Firebase ML Kit

Hi everyone,
I am new to Android development, but I have been learning along the way as I create my first OCR app using Firebase in Java. I essentially followed a YouTube video to create the app, but I ran into the following problems that I need help with:
1) If I take the picture in landscape, the app can detect the text. However, when I take the picture in portrait, the captured image is rotated 90 degrees and the app cannot detect the text in the image. What's the simplest way for me to resolve this? (See the first sketch below.)
2) Currently I take the picture with the phone's camera and the image is displayed in the app. I click my "detect text" button and the text appears. But I would like to see bounding boxes on the image that show what Firebase ML Kit is seeing. (See the second sketch below.)
3) Also, when I take a simple screenshot of a smartphone PIN screen, the app can detect most of the numbers, but it always seems to miss one. I assume this is because I am using the local, on-device version of Firebase ML Kit, but is it possible to make it more accurate without running in the cloud? I am currently using:
implementation 'com.google.firebase:firebase-core:15.0.2'
implementation 'com.google.firebase:firebase-ml-vision:16.0.0'
Thanks
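On question 1: portrait photos are typically stored sideways, with the true orientation recorded in an EXIF tag, so the detector sees rotated pixels. One approach is to rotate the bitmap upright before building the FirebaseVisionImage; a minimal sketch, assuming you have the photo's file path (FirebaseVisionImage.fromFilePath is also documented to apply EXIF orientation for you, which may be simpler):

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.graphics.Matrix;
import android.media.ExifInterface;
import java.io.IOException;

public final class OcrImageUtil {
    // Loads a photo and rotates it upright according to its EXIF
    // orientation tag, so portrait captures feed the detector the
    // same way landscape ones do.
    public static Bitmap loadUpright(String photoPath) throws IOException {
        Bitmap bitmap = BitmapFactory.decodeFile(photoPath);
        ExifInterface exif = new ExifInterface(photoPath);
        int orientation = exif.getAttributeInt(
                ExifInterface.TAG_ORIENTATION,
                ExifInterface.ORIENTATION_NORMAL);
        int degrees;
        switch (orientation) {
            case ExifInterface.ORIENTATION_ROTATE_90:  degrees = 90;  break;
            case ExifInterface.ORIENTATION_ROTATE_180: degrees = 180; break;
            case ExifInterface.ORIENTATION_ROTATE_270: degrees = 270; break;
            default: return bitmap; // already upright
        }
        Matrix matrix = new Matrix();
        matrix.postRotate(degrees);
        return Bitmap.createBitmap(
                bitmap, 0, 0, bitmap.getWidth(), bitmap.getHeight(),
                matrix, true);
    }
}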
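On question 2: each detected text block exposes a bounding rectangle that you can paint onto a mutable copy of the bitmap. A sketch against the 16.0.0 artifact listed above, where, if I recall the API correctly, the result is a FirebaseVisionText whose blocks are FirebaseVisionText.Block (newer versions rename these to TextBlock and getTextBlocks()):

import android.graphics.Bitmap;
import android.graphics.Canvas;
import android.graphics.Color;
import android.graphics.Paint;
import android.graphics.Rect;
import com.google.firebase.ml.vision.text.FirebaseVisionText;

public final class OcrOverlay {
    // Returns a copy of the source bitmap with a red rectangle drawn
    // around each text block ML Kit detected.
    public static Bitmap drawBlockBoxes(Bitmap source, FirebaseVisionText result) {
        Bitmap annotated = source.copy(Bitmap.Config.ARGB_8888, true);
        Canvas canvas = new Canvas(annotated);
        Paint paint = new Paint();
        paint.setStyle(Paint.Style.STROKE);
        paint.setStrokeWidth(4f);
        paint.setColor(Color.RED);
        for (FirebaseVisionText.Block block : result.getBlocks()) {
            Rect box = block.getBoundingBox();
            if (box != null) {
                canvas.drawRect(box, paint);
            }
        }
        return annotated; // display this in your ImageView
    }
}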
