This section describes the ways to obtain a log file. The messages in a log file depend on which debugging options have been chosen. A brief explanation of the available options can be obtained by starting SlimServer from a command line; that list is repeated later in this section.
There is a slight difference in the information logged depending on the output device. The messages sent to a file or a browser window contain only the log messages enabled in SlimServer. Sending the log output to a command-line window is more useful: the log messages are interleaved with messages from supporting applications such as MPlayer, socketwrapper, or invoked scripts, which would not be displayed in a browser window or copied to a log file.
The following assumes installation of SqueezeCenter in the default location; this may vary depending on the version of Linux (e.g. Debian, Gentoo). When viewing the log in a browser window, press the refresh button to ensure it is up to date. The default filename and location are OS-dependent. From a command line, or in the script that starts SqueezeCenter, the -logfile option allows the user to specify the filename of the log file.
For many of the common user problems involving plugins, or the inability to play a file, playlist, or stream, the following options are the ones to try first to see if they shed any light on the problem.
That's just how LMS works. The more information Alexa has, the better that matching works. If you ask to play 'Abacab' and it's wrongly spelled as 'Abakab' in your tags, then by knowing the artist is Genesis the skill can fuzzy-match through all your Genesis albums and still find it.
Without that extra clue, there would never be a match. During its first run, and whenever you use the Discover command, the skill retrieves the names of your players from your LMS server(s). This biases Alexa towards understanding a room name when what you said was not clear enough for outright comprehension. However, this is just a guideline and basically any player name should work.
The skill uses fuzzy matching to map what Alexa thinks she heard as a player name against your known players.
If no match is found, you'll be asked to repeat the player name or to perform a fresh discovery (maybe you added a new player and forgot to tell Alexa). Your discovered players are also declared to Alexa as dynamic entities against which to match. It's best to avoid unwieldy player names so you don't have to repeat them each time. Do yourself a favor: if you notice that Alexa keeps asking you to repeat the name of a certain player, just rename it for simplicity's sake.
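As an illustration of the principle (not the skill's published algorithm), fuzzy matching of a heard name against known players can be sketched with Python's difflib; the player names here are hypothetical:

```python
import difflib

# Hypothetical player names standing in for those discovered from LMS.
players = ["Kitchen", "Livingroom", "Bedroom"]

def match_player(heard, names, cutoff=0.6):
    """Return the closest known player name, or None if nothing is near enough."""
    hits = difflib.get_close_matches(heard.title(), names, n=1, cutoff=cutoff)
    return hits[0] if hits else None

print(match_player("kitchin", players))  # a misheard "Kitchen" still matches
```

The idea is the same whatever the real implementation: a slightly garbled name still resolves to a unique player, while anything below the similarity cutoff triggers a request to repeat the name.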
Note that you can freely use the word player in your utterances if you like, and you can mix the use of "in the", "on", and "on the". Using the Rename command, it's possible to rename players to something other than the name reported by LMS. This only affects the name used in the skill, not the name known to the server. Re-doing player discovery will offer to revert any renamed players back to their original LMS names.
The skill recognizes Group Players if you have that plugin installed, and will mention during discovery how many players it found in that category. Note that the name everything is reserved by the skill for assembling synchronization groups with all your players; it's therefore better not to have your own Group Player called "everything". Finally, any player names ending in an asterisk e.
MediaServer is a so-called custom skill, meaning it has to be invoked using the skill name in your utterance. Given that Alexa herself can play music sourced from e.g. Spotify or Amazon Music directly on an Echo's speaker, you omit the name at your peril! You can also tell e. Subsequently, when you issue a command in the open area between your Livingroom and Kitchen Echos, you don't have to worry which Echo actually hears you and which player it assumes as the target.
The skill will check which of the two players is actually playing a song and react accordingly. If both players happen to be playing, normal association applies. There are two basic ways to invoke any custom skill: one-shot and session. With a so-called one-shot , the skill handles a single intent and exits upon completion. Alexa will tell you the title of the first track that started playing and the skill exits. Great for issuing a single command with no fuss.
A session is started by issuing a so-called LaunchRequest to fire up the skill in an interactive mode. Alexa will now repeatedly ask for intents by answering you with: "… say a MediaServer command". You could request "Help" or "What's playing?".
Whatever the intent may be, when it completes, Alexa will ask a variant of "… what else?". This means you can issue as many back-to-back commands as you like until you terminate the session by saying one of no, stop, cancel, shut-up, never mind, forget it, or that's it.
You can also simply ignore her until she beeps and the blue ring goes off. An example volley might be:

Alexa: "… what else?"
You: "Increase the volume by 20"
Alexa: "Volume is now …. Anything else?"
You: "Disable shuffle by song"
Alexa: "OK. Anything further?"
You: "Goto track 5"
Alexa: "Now playing track 5 of 9 total. It's called "Another Record", by Genesis."
You: "No"
Alexa: "Goodbye."

Note that you can interrupt Alexa's response during a session. This will break in on her current speech but keep the session open for a new MediaServer command, unless you said "Cancel" or "Exit".
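The two invocation styles correspond to the two request types an Alexa skill receives. The following is only a sketch of that mechanism, not the skill's actual code; handle_intent is a hypothetical stand-in for the real handlers:

```python
def handle_intent(intent):
    """Hypothetical stand-in for the skill's many real intent handlers."""
    return f"Handling {intent['name']}"

def handle(event):
    """Dispatch an Alexa request envelope as either a session or a one-shot."""
    req = event["request"]
    if req["type"] == "LaunchRequest":
        # A LaunchRequest starts a session: keep it open and prompt for a command.
        return {"speech": "... say a MediaServer command",
                "shouldEndSession": False}
    # A one-shot intent is handled once, then the skill exits.
    return {"speech": handle_intent(req["intent"]), "shouldEndSession": True}

print(handle({"request": {"type": "LaunchRequest"}})["shouldEndSession"])  # False
```

The shouldEndSession flag is what makes a session "sticky": a LaunchRequest leaves the microphone open for the next command, while a one-shot closes it on completion.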
Important: whenever APL is being displayed on the screen of an Echo Show, the session is subsequently held open, even if you actually issued a one-shot command. However, unlike a deliberately-launched session following a LaunchRequest, the microphone is closed (there's no blue bar at the bottom of your screen). To issue a follow-up command while APL is still showing on your screen, you should omit the invocation name and just start the command with "Alexa, …" to open the microphone, instead of "Alexa, tell MediaServer to…".
The hints showing at the bottom of a now-playing screen reflect this syntax and serve to jog your memory. Observe that 'Stream' intents do not prolong the open session as it does not make sense to continually interrupt music playback on the Echo with a volley of commands.
The stream commands in the skill are made possible by the inclusion of the audioplayer interface which natively supports a number of built-in intents. In other words, you can just say "Alexa, pause" instead of having to say "Alexa, tell MediaServer to pause". Quite convenient. However, these intents are standalone by nature so specifying an attendant player is unsupported.
That is not always convenient. To use stream commands, a user has to be subscribed. The skill checks this and reserves the built-ins for controlling the stream on the Echo you are speaking to. For unsubscribed users, rather than 'waste' the niceties of the built-ins, they are 'hijacked' to address the assumed player instead, because there's no need to specify it.
However, it can bring a surprise for a non-subscribed user who, after subscribing, discovers that a side-effect of the ability to stream is a change in the behaviour of the built-in functions as previously experienced. In addition to the native audioplayer commands, the skill's own voice model maintains a shadow set of very similarly-worded commands that do allow a player name to be included as part of the intent (see Reference A-Z for syntax). These can be used by subscribed users to address any player in their setup, or by non-subscribers to address anything other than the assumed player.
These commands always require that you use the skill's invocation name in the command. Remember, these directives will always work as expected and are not context-dependent. Some confusion can occur if you have both MediaServer and the LMS-lite skill installed, as Smart Home skills always omit an invocation name but do allow a target device name to be specified. The playbackcontroller interface in the Smart Home paradigm provides many of the exact same commands that audioplayer gives us in a custom skill like MediaServer.
Smart Home skills also have the ability to persist a device, meaning that if you first say "Alexa, next track on the 'Kitchen' Player", you can subsequently just say "Alexa, next track" to keep targeting that player when using LMS-lite (that is, until you deliberately target a different device).
In all 3 cases there is no skill invocation name and no device specified. The answer will depend on the recent history of commands issued to the Echo in question, and sometimes what Alexa does may not be what you expected — particularly if you are unaware of this history because a different user in your household was the last to issue a command.
Context is everything! Just so you know…. If you'd prefer bare-bones setup instructions without explanation, see here. Because LMS is not accessible from outside your LAN, we need to install a so-called proxy to enable secure access to your server from the Amazon cloud. Since initializing the tunnel is an outward process originating in your LAN, there's no need to open any ports in your router; ngrok also takes care of a valid certificate for SSL.
The skill communicates with your proxy, and it is the proxy, perched on the LAN side of your firewall at home, that actually interacts with LMS. With this approach, cloud control is password-protected while local control within your LAN remains completely unrestricted. You should therefore not set a password in the LMS settings when using our skills.
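For the curious, LMS exposes a standard JSON-RPC interface at /jsonrpc.js (port 9000 by default), and that is the kind of traffic the proxy relays. A minimal sketch of building such a request body follows; this is not the skill's actual code, and the player MAC address is a placeholder:

```python
import json

def lms_payload(player_id, command):
    """Build the JSON body LMS expects at /jsonrpc.js (default port 9000)."""
    return json.dumps({"id": 1, "method": "slim.request",
                       "params": [player_id, command]})

# "mixer volume +20" is a standard LMS CLI command; the MAC is a placeholder.
body = lms_payload("00:04:20:ab:cd:ef", ["mixer", "volume", "+20"])
print(body)
```

Whether such a request arrives from the skill via the tunnel or from a browser on your LAN, LMS handles it identically; the proxy merely forwards it.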
There are actually two components to ngrok: a cloud service and an executable running on one of your computers. The ngrok cloud service provides the internet-facing URL that Alexa or a browser 'sees'. Incoming requests to this service are sent down the secure tunnel to your local executable, which is always 'listening'. It 'knows' to relay all incoming requests onwards to your LMS server because it's configured accordingly.
Responses from LMS simply follow the reverse path back up to the internet-facing ngrok cloud service, where Alexa or the browser receives them. It's all lightning fast and adds no perceptible delay to your enjoyment of the skill(s).
If your LMS is reachable via https due to an existing reverse-proxy running via e.g. …. Visit the ngrok website and Sign Up; the free plan works just fine for our purposes. Our installer will automatically download ngrok for you and configure it for LMS. Visit our secure configurator landing page to download your personalized easy-install script. You can do this on any platform; it does not have to be the machine LMS or ngrok runs on.
When all the entries are completed, press the large yellow Download Script button and the script file will be saved by your browser. Your browser may warn that files sourced from the internet are potentially harmful; it does this based on the file extension. Some browsers may even change the extension of the saved file. If that happens, you will need to manually rename the file with the correct extension before running it. The resulting script is built in RAM on our webserver; it is never saved to our filesystem.
Note, however, that the password you choose for your tunnel will be stored in plain text in the ngrok .yml configuration file. It's probably wise not to re-use a password you use elsewhere.
Windows 10 and 11 are supported as-is; earlier versions may work if your system's PowerShell is at least version 5. Then perform account-linking as explained in the bottom paragraph. Also verify that LAME is installed on your Pi if you intend to use any of the stream commands. The Pi installer installs ngrok as a persistent tcz package and adds it to onboot.
Debian Linuxes: Applies to any Debian-based distro which supports systemd services and bash scripting.
MacOS: Place the downloaded setup. A shortcut to that. Make sure to keep a safe copy of the installer script so that you can re-install ngrok with the same uuid should the need arise; that way, there will be no need to re-link the skill(s). If you run ngrok with the same authtoken in a different remote LAN, you must select a different region code for the second instance or it won't work.
If you need multiple tunnels in the same LAN, up to 4 can be run from the same proxying machine by adding extra tunnel entries to the ngrok .yml configuration file; do it this way rather than trying to run multiple instances of ngrok itself. The legacy help has an example. Alternatively, close the browser and then re-open it; the links should then be active. Because your uuid is stored in the browser's localStorage, and that is always subdomain-specific, you may be told that you have not yet set things up if you alternate between help-file source locations on our server.
If this happens, switch to the help subdomain you originally used and your localStorage should still be there.
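To illustrate the multiple-tunnel setup mentioned above, here is a sketch of what the extra entries in ngrok's .yml configuration file might look like. Tunnel names, ports, and region are placeholders, and the exact keys depend on your ngrok version, so check the ngrok documentation rather than copying this verbatim:

```yaml
# Illustrative ngrok v2-style configuration; all values are placeholders.
authtoken: <your-authtoken>
region: eu              # a second LAN on the same authtoken needs a different region
tunnels:
  lms:
    proto: http
    addr: 9000          # LMS's default web port
    auth: "user:password"   # tunnel password (stored in plain text, as noted above)
  second:
    proto: http
    addr: 9001
```

A single ngrok process reads this file and brings up all the listed tunnels at once, which is why extra entries are preferable to extra instances.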