Show HN: Graphene – GraphQL framework for Python (graphene-python.org)
106 points by syrusakbary on Jan 26, 2016 | hide | past | favorite | 25 comments


"A GraphQL query is a string interpreted by a server that returns data in a specified format" .... that is the most useless effing description of a thing. That describes basically EVERY non-binary network request.

Please guys, give us an explanation on the front page of what the hell it is (in a meaningful non-generic way) and why we would want to use it.


Awesome work Syrus! Super happy to see this library evolve to provide a pythonic wrapper around the core graphql library.

https://github.com/graphql-python/graphql-core


Thanks! Without your work at graphql-core this would not have been possible! ;)


I've been trying to work with this library. So far my conclusion is that it's heavily under-documented and the examples are almost too simple. I'm having lots of trouble understanding errors.


I'm sorry to hear that. Documentation is in progress :)

As a side note, a better approach for handling errors is coming to graphene soon.


One thing I've been curious about with GraphQL and supporting libraries. All the examples I've seen have a hardcoded schema definition, which makes sense because they're just small examples. What if the schema itself is dynamic or stored in a db, with separate schema for every user of an API. Fetching the schema and loading it into the library on every request seems inefficient, although I suppose you could cache it, but I haven't seen any libraries or examples that address that problem.


What's an actual use case for this? It sounds pretty obscure, so it's not surprising you haven't seen any examples of it.


I was thinking of a user-defined schema or ontology for a wiki, allowing the user to also create simple queries for their own data. The example data for a lot of GraphQL tools is Star Wars characters, movies and space-ships. If you look at the Star Wars wiki, it can get a lot more complicated, but it's not something you would want to hard-code because new types and relationships are being added all the time. Given the simplicity of the schemas and queries in GraphQL, and the existence of UI tools out of the box, it seems a lot more approachable than something like WikiData or Semantic MediaWiki.


Can resolve functions get executed in parallel or sequentially? Likewise, is there a dataloader equivalent? These things are kind of essential for a performant GraphQL server. I obsessively watch the request waterfall that my graphql-js server executes to make sure it's fully exploiting parallel requests.


By default, execution is sequential; however, it supports parallel resolvers in fields using asyncio, gevent, or async def in Python 3.5+.


Any chance of an example of how that might look/work?
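Not graphene's actual API, but a rough sketch of the idea with plain asyncio (the resolver names are hypothetical): an async executor can await sibling fields concurrently, so two 0.1 s resolvers finish in roughly 0.1 s total rather than 0.2 s.

```python
import asyncio
import time

# Hypothetical resolvers: each simulates a slow backend call.
async def resolve_name(user_id):
    await asyncio.sleep(0.1)
    return "user-%s" % user_id

async def resolve_friends(user_id):
    await asyncio.sleep(0.1)
    return ["a", "b"]

async def execute():
    # Awaiting sibling fields concurrently: total latency is about
    # max(field latencies), not the sum.
    start = time.monotonic()
    name, friends = await asyncio.gather(
        resolve_name(1), resolve_friends(1)
    )
    return name, friends, time.monotonic() - start

name, friends, elapsed = asyncio.run(execute())
```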



Ah nice, I'll have a look into that a bit more. What's the graphql library that's being imported alongside graphene?

Can you see how dataloader might be implemented? The JavaScript version uses magic involving process ticks to aggregate promises within a single run loop, and batch them into a single operation. My knowledge of async in Python isn't deep enough to know whether something similar is possible.


Graphene is a more pythonic wrapper on top of a lower-level library called graphql-core[1], which provides async execution using execution middlewares[2].

As far as how a data-loader is implemented, I have a few experiments doing it with gevent, but nothing production-ready. Essentially you can just use the `loop.run_callback` function to get the "next tick" semantics. I combined this with gevent's AsyncResult to provide a pythonic coroutine-based implementation of FB's DataLoader.
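A minimal sketch of that batching idea, using asyncio's call_soon as the "next tick" hook (analogous to gevent's loop.run_callback; the TinyLoader class and names are illustrative, not graphene or gevent API):

```python
import asyncio

class TinyLoader:
    """DataLoader-style batching sketch: loads requested within the
    same tick are coalesced into one batch call scheduled for the
    next tick -- the same trick as JS process ticks."""

    def __init__(self, batch_fn):
        self.batch_fn = batch_fn  # async: list of keys -> list of values
        self.queue = []           # pending (key, future) pairs

    def load(self, key):
        loop = asyncio.get_running_loop()
        fut = loop.create_future()
        if not self.queue:
            # First load this tick: schedule a single flush "next tick".
            loop.call_soon(lambda: asyncio.ensure_future(self._flush()))
        self.queue.append((key, fut))
        return fut

    async def _flush(self):
        pending, self.queue = self.queue, []
        values = await self.batch_fn([k for k, _ in pending])
        for (_, fut), value in zip(pending, values):
            fut.set_result(value)

batches = []  # record what the batch function actually received

async def batch_get(keys):
    batches.append(list(keys))
    return ["val-%s" % k for k in keys]

async def main():
    loader = TinyLoader(batch_get)
    # Two loads issued in the same tick resolve via ONE batch call.
    return await asyncio.gather(loader.load(1), loader.load(2))

a, b = asyncio.run(main())
```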

    [1] https://github.com/graphql-python/graphql-core
    [2] https://github.com/graphql-python/graphql-core/tree/master/graphql/core/execution/middlewares


One thing I'm wondering about graphql is how does it know when something changed in the db and should be re-queried. For example, I fetch some movies in my front-end webapp and they are cached. Then, later on, I query that movie again but something changed on the server, say the producer had a typo in their name. In that situation, it seems like graphql will only fetch the new fields I'm querying, and not what's already cached.

I understand it's a caching problem which is hard (I know that joke), but I'm wondering specifically how it's handled with a graph query.

The way I've been doing it is using a SQL-like structure on the front-end and downloading the main tables/rows plus changes. But GraphQL seems to be downloading and aggressively caching without keeping track of timestamps.


GraphQL doesn't specify anything to do with caching. If you run the same query twice, they'll be fully evaluated each time. You can (and probably should) implement some caching, but it's up to you, and therefore cache invalidation is up to you too.
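For illustration, the do-it-yourself caching described above might look like this: memoize full query results keyed by the query string, with a timestamp so stale entries get re-fetched. The names, TTL, and `execute` callback are all hypothetical; GraphQL itself specifies none of this.

```python
import time

CACHE_TTL = 60.0   # seconds before an entry is considered stale
_cache = {}        # query string -> (timestamp, result)

def run_query(query, execute):
    """Return a cached result if still fresh, else execute and cache.

    `execute` stands in for whatever actually evaluates the GraphQL
    query against the server.
    """
    hit = _cache.get(query)
    now = time.monotonic()
    if hit is not None and now - hit[0] < CACHE_TTL:
        return hit[1]
    result = execute(query)
    _cache[query] = (now, result)
    return result

calls = []
def fake_execute(q):
    calls.append(q)
    return {"data": {"movie": {"title": "A New Hope"}}}

first = run_query("{ movie { title } }", fake_execute)
second = run_query("{ movie { title } }", fake_execute)  # cache hit
```

Invalidation is still on you: a typo fix on the server won't be seen until the TTL expires or you evict the entry.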


Related: the meteor people announced they'll be working on this problem. http://info.meteor.com/blog/reactive-graphql


Snappy website. Is that on github as well? I'd like to have a poke around and see how it's made.


It's made with Gatsby (https://github.com/gatsbyjs/gatsby)


Thanks


Well this is just splendid!


Is there a way to define attributes as a function of other attributes? I poked through the documentation and tried to prototype it but couldn't get it to work. Am I missing something?


Hi dwiel, do you mean fields inside types (as class attributes)? If so, putting the attributes inside a function is usually for lazily resolving the fields.

Sometimes the types are defined after the class definition. In this case, you can use strings to reference field types. Something like:

  class User(graphene.ObjectType):
      friends = graphene.Field('self')  # 'User' would work too.


Yeah I think that is what I mean. I want one class attribute to be a function of another class attribute. This may be naive but here is an example of what I want to be able to do:

  class User(graphene.ObjectType):
      name = graphene.String()
      initials = graphene.String()

      def resolve_name(self, args, info):
          return user_service.lookup_name(...)

      def resolve_initials(self, args, info):
          # How can I refer to the result of resolve_name above?
          return ''.join(part[0] for part in name.split(' '))

Perhaps I'm not using graphene in the way it was intended to be used?


Usually you have a self._root object (in Django, for example, this _root is the model instance) and then you read the model's field, e.g. the name, from it in both resolve_name and resolve_initials.
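That root-object pattern, sketched without the framework (plain classes standing in for graphene types; UserModel and the field names are hypothetical):

```python
class UserModel:
    # Stand-in for e.g. a Django model instance.
    def __init__(self, name):
        self.name = name

class UserType:
    """Both resolvers read from the shared root instance, so
    resolve_initials never needs to call resolve_name directly."""

    def __init__(self, instance):
        self.instance = instance

    def resolve_name(self, args, info):
        return self.instance.name

    def resolve_initials(self, args, info):
        # Derive initials from the same underlying data, not from
        # the other resolver's return value.
        return ''.join(part[0] for part in self.instance.name.split())

user = UserType(UserModel("Ada Lovelace"))
```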





