Wolfram Alpha Gets Its First Core Update After Launch
One thing that many webmasters and users dislike is that search engines keep them in the dark about updates, be it algorithm changes or other modifications. The Wolfram Alpha team seems to have felt the same way, as they decided to publish details about the first core update since the launch of the new search engine on their blog.
The blog post lists the major changes of the core update, noting at the end that about 1.1 million data values were affected by it. Most of the changes add new searches or information to the search engine. This includes, for example, updated data for some European countries such as Slovakia, regions (like Wales) and country borders (India, China..), but also more interesting comparisons (USA deficit vs. UK), combined time series plots of different quantities (population Germany gdp) and additional ways of searching for information and data entries.
- More comparisons of composite properties
- City-by-city handling of U.S. states with multiple timezones
- Additional probability computations for cards and coins
- Additional output for partitions of integers
- Improved linguistic handling for many foods
- Support for many less-common given names
- More “self-aware” questions answered
There is obviously still a fundamental difference between the information provided by Wolfram Alpha and classic search engines like Google Search, Bing or Yahoo Search. The developers are, however, well on their way to filling a gap that most users were probably not aware of until they discovered that they could use Wolfram Alpha to compute information that would otherwise require lots of manual work.
Still, the search engine is limited, and whether it understands a query sometimes depends on how the search is phrased.
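For readers who want to probe this phrasing sensitivity themselves, the public Wolfram|Alpha query API can be scripted. The sketch below is a minimal example, not anything described in the article: it assumes you have registered an App ID with Wolfram, and the APP_ID value and the two sample phrasings are placeholders.

```python
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

# Hypothetical App ID -- replace with one obtained from the Wolfram|Alpha developer portal.
APP_ID = "YOUR-APPID"

def ask_wolfram_alpha(query):
    """Send a query to the Wolfram|Alpha query API and return plain-text pod results."""
    params = urllib.parse.urlencode({
        "appid": APP_ID,
        "input": query,
        "format": "plaintext",
    })
    url = "https://api.wolframalpha.com/v2/query?" + params
    with urllib.request.urlopen(url) as response:
        root = ET.fromstring(response.read())
    # success="false" generally means the engine could not interpret the phrasing.
    if root.get("success") != "true":
        return None
    return [pt.text for pt in root.iter("plaintext") if pt.text]

# Two phrasings of a similar question; one may parse while the other does not.
for phrasing in ("population Germany gdp", "Germany population and GDP over time"):
    result = ask_wolfram_alpha(phrasing)
    print(phrasing, "->", "understood" if result else "not understood")
```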