BADUK PROBLEMS PDF

It is the same in baduk: often we want to be strong players, but we don’t work enough on reading correctly, on seeing what is really going to happen. That is what these problems are for. I generally don’t provide answers to the problems, because you will eventually find the answer after a few tries; if a problem seems wrong, please let me know by e-mail. Improve your Go game (weiqi, baduk) with Tsumego Pro and its large collection of tsumego problems! Each go problem contains all valid answers and a lot more.

Author: Gulrajas Taukree
Country: Sierra Leone
Language: English (Spanish)
Genre: Literature
Published (Last): 17 March 2007
Pages: 140
PDF File Size: 17.66 Mb
ePub File Size: 4.96 Mb
ISBN: 334-4-58766-218-4
Downloads: 83536
Price: Free* [*Free Registration Required]
Uploader: Sajinn

We have discussions, go problems, game reviews, news, events, tournaments, lessons and more! Submit a game for review! Neural-net-extracted Go problems from pro games. I just finished a new website today that I hope people will find interesting!

I trained neural nets to model players of different ranks, and then took a large database of pro games and used them to extract positions when the next move would likely be instinctual for a typical human pro but less instinctual for a typical weaker player.
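A rough sketch of the selection rule being described, as I read it: condition a policy net on a rank, and keep positions where the pro-conditioned policy puts high probability on the actual game move while the weak-rank-conditioned policy does not. The helper name policy_for_rank, the ranks, and the thresholds below are my own illustrative assumptions, not the actual pipeline.

    # Illustrative sketch only: policy_for_rank(position, rank) is assumed
    # to return a dict mapping moves to probabilities from a rank-conditioned
    # policy net. The ranks and thresholds are made up for the example.
    def is_instinct_gap_problem(position, game_move, policy_for_rank,
                                pro_rank="pro", weak_rank="5k",
                                pro_min=0.8, weak_max=0.3):
        pro_policy = policy_for_rank(position, pro_rank)
        weak_policy = policy_for_rank(position, weak_rank)
        # Keep positions where the pro's actual move is near-instinctual
        # for a modeled pro but much less so for a modeled weaker player.
        return (pro_policy.get(game_move, 0.0) >= pro_min
                and weak_policy.get(game_move, 0.0) <= weak_max)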

Also feel free to PM me if you’re having trouble getting it to load; getting things to work properly on all browsers is hard. Among other things, it seems to have a high density of “basic shape problems” from local open-space fights. I’m excited to see what people make of it. Updated my GitHub repo with the training code that I used to train the underlying neural net: https: I have some other ideas baking in the back of my head too about other ways to extract interesting Go problems, which I might experiment with eventually.

I want to learn a bit more about how value nets work first though. What a fun idea! I found the problems quite pleasant, although even the 3d level seemed generally fairly easy. It seemed like there would often be several obvious shape moves that ‘might have been the pro move in a different situation’, but I’m not sure how sensitive the right choice was to the local position, which could make them wrong.

Some of the problems picked out were quite interesting, like choices in the taisha being detected as complicated positions.

I also got a fun mirror go position, which was easier than the system probably realised. You must be stronger or have better shape instincts than I do. I’m close to that rank and find that I get a reasonable fraction of the 3d ones wrong, and often in ways where, once I see the answer, it’s clear in retrospect why the game move was better.

I also do find many of them to be surprisingly easy, though on some of them I suspect that if I thought more, I’d find it’s not so obvious why the obvious move is actually correct. I keep wondering if it’s really just that simple often enough in actual games. I think when the pressure is off we tend to go with that “gut” move, and these whole-board problems highlight that.

In other words, in games we tend to talk ourselves out of solid, proper moves to play that thing we often imagine, in paranoia, to be urgent. Sometimes I think it’s exactly what you said. But also, there’s a strong bias in what you see here due to the problem generation method – you only see the cases where the neural net thought the game move was unique and clear for a pro – you don’t see all the times when the pro did make a surprising, hard-to-predict move. And sometimes the neural net is simply wrong that a weaker player would mess the problem up.

I think a nontrivial number of the easy problems in the 3d set actually fall into this category. Probably I could have trained it more, but the GPU time is expensive. I found at least one case where it was simply wrong, and the wrongness differed by the rotation of the board. As has been seen with Leela Zero, things that are hard for the neural net often come along with instability in the prediction across rotations of the board, although here the hardness wasn’t in predicting the pro move, but rather in predicting whether a weaker player might or might not differ from the pro!


Right now, I’m regenerating the problems and seeing if averaging over the rotations reduces some of this. Maybe it’s also possible to throw out problems where different rotations give very different answers. Regenerated the problems, although I think it didn’t affect the prediction quality that much. After some fiddling, perhaps a few percent of the “wrong-difficulty” problems are fixed now, so things are ever so slightly more consistent than before.
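For what it’s worth, here is a minimal sketch of the rotation-averaging idea, assuming the policy net is wrapped as a function from an NxN numpy board to an NxN array of move probabilities (that interface, and using the spread between symmetries to flag unstable problems, are my assumptions, not the site’s actual code):

    import numpy as np

    # Average a policy over the 8 symmetries of the board; the spread
    # between symmetries can also be used to discard unstable problems.
    def symmetry_averaged_policy(board, policy_fn):
        outputs = []
        for k in range(4):               # the four rotations
            for flip in (False, True):   # with and without a reflection
                b = np.rot90(board, k)
                if flip:
                    b = np.fliplr(b)
                p = policy_fn(b)
                # Undo the transform so every output is in the original
                # orientation before averaging.
                if flip:
                    p = np.fliplr(p)
                p = np.rot90(p, -k)
                outputs.append(p)
        outputs = np.stack(outputs)
        return outputs.mean(axis=0), outputs.std(axis=0).max()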

Anyways, I think I’ll stop fiddling with it now. Could you explain a bit how neural networks are involved? What were the training set and labels you used? The neural net receives the player rank and the server as inputs. This is really interesting.

Are you planning to write a paper explaining it? I’m curious how you can use a NN to filter for good problems. For example, could you do this the other way round? Review amateur games and find bad moves that are typical for the rank of the player, but seem bad for players who are maybe 5 stones stronger and pro players too.

This might help amateur players to fix their bad habits. I like this idea, it seems worth trying. A risk of using a pure policy net is that sometimes the right move really is one the neural net didn’t rate highly. In pro games, you have confirmation of the form “well, the pro actually played that move”, so even though pros make mistakes too, usually the move will be reasonable. In amateur games, you don’t have that. I bet this could be made to work if you augment the rank-based neural net, which understands how the amateur might play, with the support of a strong bot like Leela Zero, which can confirm which moves are good and bad.
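A sketch of what that combination might look like, with policy_for_rank as before and a hypothetical engine_eval(position, force_move=None) returning (move, winrate) from a strong bot; every name and threshold here is an assumption for illustration, not an existing tool:

    # Mine amateur games for moves that are typical at the player's rank
    # but clearly inferior according to a strong engine.
    def find_typical_mistakes(positions, policy_for_rank, engine_eval,
                              player_rank, min_typicality=0.2,
                              min_winrate_loss=0.05):
        mistakes = []
        for position, played_move in positions:
            expected = policy_for_rank(position, player_rank)
            if expected.get(played_move, 0.0) < min_typicality:
                continue  # not a typical move for this rank, skip it
            best_move, best_wr = engine_eval(position)
            _, played_wr = engine_eval(position, force_move=played_move)
            if best_wr - played_wr >= min_winrate_loss:
                mistakes.append((position, played_move, best_move))
        return mistakes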

It would be super cool if there was a program that would rank your moves during a game. The logic might be somewhat similar. Play a normal first move and it might say 9p; play something unconventional and it might say 20k. Alternatively it could suggest better moves on replay.
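One plausible way to build such a “what rank does this play look like” estimator from the same kind of rank-conditioned policy net is to treat each policy as a likelihood and keep a running posterior over candidate ranks; the interface and rank list below are my assumptions:

    import math

    # Estimate a player's rank from their moves by scoring each move
    # under rank-conditioned policies and picking the most likely rank.
    def estimate_rank(moves, policy_for_rank,
                      ranks=("20k", "10k", "5k", "1d", "5d", "pro")):
        log_post = {r: 0.0 for r in ranks}  # uniform prior in log space
        for position, move in moves:
            for r in ranks:
                p = max(policy_for_rank(position, r).get(move, 0.0), 1e-6)
                log_post[r] += math.log(p)
        return max(log_post, key=log_post.get)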

Challenging Problems For Days of Study (Wei Qi, Weiqi, Baduk, Paduk, Igo)

I love this as an alternative to typical Tsumego! Even if the “correct” move is not always guaranteed to be the one true answer, there’s always something to learn from it and the game continuation.

It would be really nice if you could add something where you could animate the numbered moves; it would make it much easier to see what is happening rather than playing the moves out in your head. Animation would be a bit of work, so I’m probably not going to get to this, although I agree it would be nice. At least right now it’s no worse than the numbered diagrams you see everywhere else – go books, sensei’s library, etc.

This is true – however, problems on the internet frequently have move animations. Books don’t because, well, they’re books. It would also be nice if you could show what the expected answer would be for the rank the problem is aimed at, say 3d or 1d for example.

Thanks for the suggestion. It might interest you to know that for dan players, usually the expected amateur move is, by far, precisely the pro move. What happens in such cases is that as you go down the ranks, the probability mass on the pro move expands out into a fuzzy cloud that starts to blur out over “plausible shape” moves nearby, even though the clear plurality move is still the pro move.
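If one wanted to quantify that “fuzzy cloud” effect, a simple proxy (my suggestion, not something the site does) is the entropy of the rank-conditioned policy, which tends to grow as the modeled rank gets weaker:

    import math

    # Entropy in bits of a policy's move probabilities; higher entropy
    # means the probability mass is spread over more candidate moves.
    def policy_entropy(move_probs):
        return -sum(p * math.log2(p) for p in move_probs if p > 0)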


Been trucking through the 1d problems and it turns out I’m pretty good at picking out single moves; it’s the continuations I seem to struggle with! Glad you like it. Yeah, continuations are hard. Probably due to what you stated, that there’s usually that one move or at least an “obvious” move. What is up with the algorithm though? I just clicked on the 14k set for giggles and struggled with the first two that came up!

Clearly there’s a major bit of shape knowledge there that separates many 18k from 10k, so it’s a good problem to put in the 14k set. But as you get stronger, you start noticing other complexities in the position.

Taking those into account, the move to play doesn’t actually become more clear, and occasionally can actually become less clear. Pro games are messy and deep, so many problems are like that. In pro games, even when there’s a good local shape for a beginner to learn, it doesn’t necessarily mean that the position is simple for someone much stronger.



Maybe add a flag button, so that wrong problems can slowly be filtered out? It could also be a good source of data about which problems have some kind of issue. Thanks for the suggestion!

I pondered that idea for a bit – one of the tricky parts is that right now the site is entirely stateless, which makes it really easy to maintain and is a property that isn’t nice to give up.

What I do think might work is to filter the problems using a strong bot like Leela Zero, to try to catch errors and bad problems.

The filtering could be done entirely offline without introducing any complexity to the site. I don’t plan to do this right away as I want to explore some other ideas first, and renting the GPU time to do the filtering would not be cheap, but maybe in a couple of months I might revisit this idea.
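The offline filter could be as simple as the sketch below, reusing the hypothetical engine_eval interface from the earlier example: drop any generated problem whose intended answer the strong bot considers clearly worse than its own best move (the tolerance is illustrative, not a real setting).

    # Offline filter: keep only problems whose intended answer the strong
    # engine agrees is (close to) best.
    def filter_problems(problems, engine_eval, tolerance=0.02):
        kept = []
        for position, correct_move in problems:
            _, best_wr = engine_eval(position)
            _, correct_wr = engine_eval(position, force_move=correct_move)
            if best_wr - correct_wr <= tolerance:
                kept.append((position, correct_move))
        return kept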

How many problems are on your site? Will there be an option to download multiple problems? What training set(s) did you use? And would you also provide the trained networks? See the other reddit comments for how I made these problems, or this Life in 19x19 thread. Getting the website up was a lot of work; I’m not an experienced web dev and had to learn a lot as I went. So I want to take a little break now, but perhaps the weekend after next I’ll clean things up and update my GitHub https: You might have to be a bit dev-oriented yourself if you want to do anything with it.

My architecture is a bit different than, say, Leela Zero’s. It has some new kinds of layers that in my experiments improved the policy prediction accuracy quite a bit over the plain AlphaGo-style architecture, so it might not simply plug in to other things. How did you get the non-pro games? I was looking around, but didn’t really find anything except pro games. There is an OGS API, but it is throttled and would take forever to get the games.

This was a good starting point: