
0.3.6 Developer Log Truncation regardless of settings #300

Open
jdnlp opened this issue Jan 14, 2025 · 8 comments
Labels
enhancement New feature or request

Comments

@jdnlp

jdnlp commented Jan 14, 2025

Upon updating to 0.3.6 Build 8 from 0.3.5, I'm unable to obtain full, untruncated developer logs. This happens regardless of the settings I use, such as Verbose Logging and Log Prompts and Responses.

I built a small utility for a personal project that relies on receiving data from a website which uses LM Studio as an optional API connection. The program works by parsing the saved, dated log files in (user.cache\lm-studio\server-logs). Prior to updating to 0.3.6, I could see the full, verbose developer/server log within the LM Studio application as well as in the saved, dated log files. After updating to 0.3.6, the information I need is removed by truncation. I can't imagine this is intentional, so it seems the Verbose Logging setting has been disabled or subverted by the latest update somehow, even when it is enabled by the user.
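For context, here is a rough sketch of the kind of parser I mean. All names are hypothetical (it's not my actual code), and the assumed log location and line format follow the 0.3.5 example later in this comment thread:

```python
import re
from pathlib import Path

# Assumed location of the dated server logs (illustrative, not documented).
LOG_DIR = Path.home() / ".cache" / "lm-studio" / "server-logs"

# Matches a timestamped DEBUG request line and captures the raw JSON body,
# stopping at the next timestamped line or at end of file.
REQUEST_RE = re.compile(
    r"\[(?P<ts>[^\]]+)\]\[DEBUG\] Received request: POST to (?P<path>\S+)"
    r" with body (?P<body>\{.*?)(?=\n\[|\Z)",
    re.DOTALL,
)

def extract_requests(log_text):
    """Return (timestamp, endpoint, raw JSON body) tuples from one log file."""
    return [
        (m.group("ts"), m.group("path"), m.group("body"))
        for m in REQUEST_RE.finditer(log_text)
    ]

def scan_logs(log_dir=LOG_DIR):
    """Parse every dated log file in the server-logs directory."""
    results = []
    for log_file in sorted(log_dir.glob("*.log")):
        results.extend(extract_requests(log_file.read_text(errors="replace")))
    return results
```

With truncated 0.3.6 logs, the captured body simply no longer contains the full prompt text, which is why the utility breaks.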

To clarify, in the server/dev log, "... ..." replaces any large chunk of text. This is undesirable behavior.

I'll have to revert to 0.3.5 until this issue gets fixed, but I wanted to reach out and make this issue known.

@yagil
Member

yagil commented Jan 14, 2025

Thanks @jdnlp. Are you able to include snippets of "good logs" and "bad logs"?

@jdnlp
Author

jdnlp commented Jan 14, 2025

@yagil Of course. I used lorem ipsum to provide textual content for this example.

This is not the entire content of a log file; the snippet focuses on the part that matters for my application specifically. I believe the example explains the issue.

The ellipses shown are supposed to read "Truncated in logs" surrounded by less-than and greater-than symbols, but the angle-bracketed text disappears here on GitHub when I type it out, due to formatting, I suppose.

The truncation in the logs occurs regardless of the Context Overflow settings chosen by the user such as "Truncate Middle", "Rolling Window" or "Stop at Limit".
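For anyone scripting around this in the meantime, a trivial check can at least detect when a logged body has been truncated. The marker strings below are just what I observe in my logs (and what I describe above); they are not any documented format:

```python
# Marker strings observed in truncated 0.3.6 log bodies (not documented).
TRUNCATION_MARKERS = ("... ...", "<Truncated in logs>")

def is_truncated(body):
    """Return True if a logged request/response body looks truncated."""
    return any(marker in body for marker in TRUNCATION_MARKERS)
```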


Good example of log (0.3.5):

[2025-01-12 05:46:55][DEBUG] Received request: POST to /v1/chat/completions with body {
"model": "",
"temperature": 0.9,
"max_tokens": 500,
"stream": true,
"messages": [
{
"role": "system",
"content": "[Lorem ipsum odor amet, consectetuer adipiscing elit. Conubia molestie fusce phasellus id accumsan nascetur cras a fringilla. Eleifend consectetur fames dis tortor laoreet maecenas dui imperdiet. Dapibus non ipsum vestibulum rutrum ultricies fusce cursus semper. Vivamus amet proin ullamcorper pulvinar ad fringilla bibendum feugiat. Habitasse ut porta semper ac ultrices. Torquent magnis porta quam mus nascetur maximus semper. Per dui integer pharetra tortor parturient venenatis tristique morbi pulvinar.

Litora habitant massa dignissim; sollicitudin nunc et gravida. Rutrum nunc class consequat cubilia eu integer rhoncus id. Nostra metus bibendum dictum et lacinia. Laoreet in eu urna tristique quisque in class quisque. Pulvinar justo tortor congue enim iaculis tristique praesent. Curae malesuada condimentum dictum scelerisque blandit vivamus finibus faucibus? Taciti conubia tristique praesent lorem et. Phasellus turpis quam eros libero mauris. Pharetra porta habitant mus eget vitae habitasse.

Maximus hendrerit non aptent; praesent morbi praesent purus mollis. Risus himenaeos inceptos a nulla sit egestas pharetra diam. Dolor malesuada nascetur magna mattis mattis cras. Luctus per quisque ac imperdiet; lorem aenean cras natoque feugiat? Nibh nam porttitor id nascetur iaculis viverra ultrices ex. Nullam commodo facilisis tempor congue est mollis nisi malesuada ipsum. Faucibus nullam tempor sem porttitor amet a mus."


Bad example of log (0.3.6 build 8):

[2025-01-12 05:46:55][DEBUG] Received request: POST to /v1/chat/completions with body {
"model": "",
"temperature": 0.9,
"max_tokens": 500,
"stream": true,
"messages": [
{
"role": "system",
"content": "[Litora habitant massa dignissim; sollicitudin nunc et gra... ... facilisis tempor congue est mollis nisi malesuada ipsum."
},

@yagil
Member

yagil commented Jan 14, 2025

Thanks @jdnlp, this helps a lot; I get the issue now. A particular log format is notoriously not a stable API (as demonstrated here). A few questions for you:

  1. do you control the website that makes the request?
  2. what language is your utility written in?

@jdnlp
Author

jdnlp commented Jan 14, 2025

@yagil Of course, my pleasure to help what I see as a great resource in LM Studio!

  1. I don't control the website, but the issue seems isolated to LM Studio because 0.3.5 does not produce this issue.

  2. I just use Python and process the data with regex, essentially; it's really nothing fancy. I should note that this doesn't seem necessary, as 0.3.6 already does that for me... but the only way to get the data I need with 0.3.6 is via the lms log stream command in the CLI. I would then have to save that data to a text file myself, which seems counterintuitive when LM Studio already produces dated logs for me. I figured I should reach out to find out what happened.
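For now, a sketch of that workaround looks roughly like the following. The dated filename pattern is my own invention for mimicking the old dated logs; only the `lms log stream` command itself comes from the CLI:

```python
import datetime
import subprocess

def dated_log_command(now=None):
    """Build the `lms log stream` command and a dated output filename.

    The filename pattern here is illustrative, not LM Studio's own format.
    """
    now = now or datetime.datetime.now()
    filename = now.strftime("%Y-%m-%d.%H-%M-%S.captured.log")
    return ["lms", "log", "stream"], filename

def capture_logs():
    """Stream logs into a dated file until interrupted (e.g. Ctrl+C)."""
    cmd, filename = dated_log_command()
    with open(filename, "w") as out:
        subprocess.run(cmd, stdout=out, check=False)
```

It works, but it means running a second process alongside LM Studio just to recover behavior 0.3.5 gave for free.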

@ognistik

I'm having the same issue and looking for a way to downgrade to 0.3.5 because of it.

@yagil
Member

yagil commented Jan 15, 2025

We'll add a way to turn this back on

@yagil yagil added the enhancement New feature or request label Jan 15, 2025
@sgamble1-wowcorp

"Just-in-Time Model Loading" switch also isn't being respected. Possibly related, since it's in the same Settings drop down.

@yagil
Member

yagil commented Jan 15, 2025

> "Just-in-Time Model Loading" switch also isn't being respected. Possibly related, since it's in the same Settings drop down.

@sgamble1-wowcorp how are you determining that? I see it working. Please create a separate issue because it is in fact separate from this one
